
The artificial intelligence (AI) industry is a fast-moving space. Innovators, governments, and everyone in between currently have their attention fixed on the products and services that AI fuels. In many cases, AI can significantly improve business operations and our quality of life, but no revolutionary technology comes without risks that can cause harm.

Here are a few significant events that took place in AI last week:

Apple’s Siri is getting an AI update

At its Worldwide Developers Conference on June 10, Apple (NASDAQ: AAPL) announced a partnership with OpenAI to bring new and improved AI features to its devices. OpenAI's technology will be integrated with Apple's Siri, the virtual voice assistant launched in 2011 that never gained significant popularity among users. Adding OpenAI's features to Siri could revitalize the voice assistant, giving users a faster, more efficient way to access ChatGPT and its outputs without needing to download or open the ChatGPT app on their phones.

Beyond the AI capabilities from the OpenAI partnership, Apple has announced new AI integrations that users can expect in the future. Rather than calling these additions “artificial intelligence,” Apple has coined them “Apple Intelligence.” Apple Intelligence will enable transcription for phone calls, AI photo retouching, and improvements in Siri’s natural conversation flow. The software can also summarize notifications, text messages, articles, documents, and open web pages.

Whether Apple users will embrace OpenAI-enabled Siri and the upcoming Apple Intelligence features remains to be seen. However, this partnership may benefit OpenAI more than Apple. With an estimated 1.5 billion iPhone users gaining access to ChatGPT through Siri, OpenAI will likely see a significant increase in usage of its service. The more users an AI service has, the better it can train and improve its models based on user data.

Pope Francis addresses ethical AI use at G7 Summit

In the first G7 Summit to feature a Pope as an invited participant, Pope Francis delivered a message addressing the use of artificial intelligence. At the core of his message to the G7 was an emphasis on ethics and morality in the creation and use of AI systems. He urged lawmakers in the seven member nations (the U.S., Canada, France, Germany, Italy, Japan, and the United Kingdom) to ensure AI remains human-centric, especially for consequential decisions such as whether to use weapons, which he said should be made by humans, not machines.

“We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives by dooming them to depend on the choices of machines,” he said. “We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: Human dignity itself depends on it.”

This is not the first time the Pope has spoken about AI. In December, he emphasized the necessity for an international treaty to guide the ethical development of AI, highlighting the risks associated with technology “devoid of human values.”

The Pope's latest remarks underscore the need for global leaders to address AI in public settings. Doing so signals to the world that they are aware of and actively managing today's leading technology and its potential societal impacts. Additionally, given AI's current popularity, discussing it can bring more attention to an individual's platform, which is often a goal for global thought leaders and lawmakers.

Will Nvidia’s 10-for-1 stock split fuel an AI investment surge?

On Monday, Nvidia (NASDAQ: NVDA), the dominant player in the market for AI chips, conducted a 10-for-1 stock split. This means that for every share of Nvidia an individual owned, they now own 10 shares (e.g., if you owned 10 shares pre-split, you would own 100 shares post-split); when a stock splits, the company's share price is also divided by the same factor.
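For readers who want to see the arithmetic, here is a minimal sketch in Python using illustrative figures in line with the article's example (10 shares and a $1,310 pre-split price, both hypothetical holdings rather than market data). It simply shows that a split multiplies the share count and divides the price by the same factor, leaving the total value of the position unchanged.

```python
# Minimal sketch of stock-split arithmetic; the holdings and prices below are
# illustrative assumptions, not market data or investment guidance.

def apply_split(shares: int, price: float, ratio: int = 10) -> tuple[int, float]:
    """Return (shares, price) after a ratio-for-1 stock split."""
    return shares * ratio, price / ratio

pre_shares, pre_price = 10, 1310.0                 # 10 shares at $1,310 each
post_shares, post_price = apply_split(pre_shares, pre_price)

print(post_shares, post_price)                     # 100 shares at $131.00 each
print(pre_shares * pre_price, post_shares * post_price)  # 13100.0 13100.0 (value unchanged)
```

The point of the sketch is that a split only changes the unit size of a holding, not its value.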

Over the past two years, the rise of artificial intelligence has driven surges in the valuations of the manufacturers of the chips that AI providers desperately need to train and run their models. Nvidia, for example, is up 167% year to date.

With the 10-for-1 stock split, Nvidia is likely to continue its uptrend. Although a stock split has no tangible impact on a company's value, it can have a psychological effect on market participants. Buying one share of Nvidia at its post-split price of $131 feels less intimidating than buying one share at $1,310. This will likely lead to an influx of individuals buying Nvidia, including newcomers who thought they couldn't afford it at its pre-split price, driving its valuation up further.

Meta forced to delay EU launch over privacy concerns

Meta (NASDAQ: META) had to halt training its in-development AI bot in Europe after receiving a request from the Irish Data Protection Commission (DPC). The issue centered on Meta's use of personal data to train its large language models (LLMs). Although Meta gave users the option to opt out of having their data used for this purpose, the DPC requested that Meta halt the training of its models on user data, and Meta complied.

Understandably, Meta is not happy about having to halt its training.

“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Stefano Fratta, Global Engagement Director of Meta Privacy Policy, said. “Put simply, without including local information we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.”

This incident highlights the EU's stance toward AI and its impact on companies looking to operate in the region. Because the EU takes a stricter stance on the protection of residents' personal data, tech giants, which are known for using customer data across their businesses, often find themselves constrained when operating there. This significantly affects AI companies, which require large amounts of data to train, test, and fine-tune their models. As time passes, it will be interesting to see how these tech giants offering AI solutions navigate the EU's AI Act and whether they decide it is more beneficial to comply with stringent regulations or to forgo a presence in the EU.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing its immutability. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: How Artificial Intelligence cures the world’s loneliness epidemic
