
Microsoft Research and Peking University researchers have reached a new milestone in their attempts to teach OpenAI’s GPT-4 how to operate within the Android operating system.

In a joint report, the researchers described relative success in fine-tuning large language models (LLMs) to operate autonomously within a specific operating system. While generative artificial intelligence (AI) has found myriad use cases, the technology has struggled to work within the confines of an operating system without human interference.

The study highlighted several reasons for generative AI’s inability to explore Android autonomously, including its reliance on reinforcement-style training. Most LLMs explore a new environment by trial and error, setting the stage for security issues in real-world applications.

“Firstly, the action space is vast and dynamic,” the report read. “Secondly, real-world tasks often require inter-application cooperation, demanding farsighted planning from LLM agents. Thirdly, agents need to identify optimal solutions aligning with user constraints, such as security concerns and preferences.”

To address these challenges, the research team built AndroidArena, a training environment designed for LLMs to explore the Android operating system. Preliminary experiments surfaced new obstacles to autonomous exploration by LLMs, centered primarily on understanding and reasoning.

As the experiments within the AndroidArena proceeded, the researchers noted additional challenges to reflection and exploration by models.

While exploring potential solutions, the team eventually settled on prompting LLMs with detailed information about previous attempts to reduce errors. By embedding these memories in prompts, the researchers recorded a 27% improvement in accuracy when operating Android systems.
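The memory-embedding idea can be illustrated with a short sketch. This is not the paper's actual code; the function and field names below are illustrative assumptions about how a record of prior attempts might be folded into the next prompt so the model avoids repeating failed actions.

```python
# Hedged sketch of memory-augmented prompting: prior exploration attempts
# are serialized into the prompt text before the next action is requested.
# All names here are illustrative, not taken from the study's implementation.

def build_prompt(task: str, memory: list[dict]) -> str:
    """Compose an instruction prompt that includes prior attempts."""
    lines = [f"Task: {task}", "Previous attempts:"]
    if not memory:
        lines.append("  (none)")
    for i, attempt in enumerate(memory, 1):
        lines.append(
            f"  {i}. action={attempt['action']!r} -> "
            f"result={attempt['result']!r}"
        )
    lines.append("Choose the next UI action, avoiding actions that already failed.")
    return "\n".join(lines)

# Example: two failed attempts are carried into the next prompt.
memory = [
    {"action": "tap('Settings')", "result": "error: element not found"},
    {"action": "swipe_up()", "result": "no change"},
]
prompt = build_prompt("Enable Wi-Fi", memory)
```

The key design point is that the memory lives entirely in the prompt, so the technique works with any off-the-shelf LLM without retraining.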

The solution yielded positive results when extended to other LLMs, including Google’s Bard (NASDAQ: GOOGL) and Meta’s LLaMA 2 (NASDAQ: META), with the researchers optimistic that new iterations will demonstrate more advanced functionality.

Optimizing AI one feature at a time

While generative AI has enjoyed mass adoption, researchers are scrambling behind the scenes to fix several problems associated with the technology. One study by Anthropic focused on curbing sycophancy in LLMs and earned plaudits from industry players, while AutoGPT and Microsoft (NASDAQ: MSFT) are testing an AI monitoring tool to flag harmful real-world outputs.

“We design a basic safety monitor that is flexible enough to monitor existing LLM agents, and, using an adversarial simulated agent, we measure its ability to identify and stop unsafe situations,” the Microsoft-backed researchers wrote.
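A safety monitor of this kind can be sketched as a simple layer between the agent and its environment. This is a minimal illustration under stated assumptions, not the study's implementation; the pattern list and function names are invented for the example.

```python
# Hedged sketch of a basic agent safety monitor: each action an LLM agent
# proposes is screened against a deny-list before it reaches the environment.
# The patterns and action strings below are illustrative assumptions.

UNSAFE_PATTERNS = ("rm -rf", "factory_reset", "send_payment")

def monitor(action: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    for pattern in UNSAFE_PATTERNS:
        if pattern in action:
            return False, f"blocked: matched unsafe pattern {pattern!r}"
    return True, "allowed"

# An adversarial agent proposes actions; the monitor screens each one.
proposed = ["open('Settings')", "factory_reset()", "toggle_wifi()"]
decisions = [monitor(a) for a in proposed]
```

Testing such a monitor against an adversarial simulated agent, as the quote describes, amounts to generating deliberately unsafe actions and measuring how many the monitor catches.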

Other studies are focused on merging blockchain technology with AI, while some are pursuing labeling AI-generated content to stifle the proliferation of deepfakes.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
