
Researchers at Google DeepMind (NASDAQ: GOOGL) and Google Research have proposed a new method to extend the capabilities of artificial intelligence (AI) models by interlinking them with other existing AI systems.

The method, dubbed Composition to Augment Language Models (CALM), scored impressive results in early studies. According to the researchers' 17-page report, CALM allows AI researchers to augment existing large language models (LLMs) with new capabilities.

“Foundational models with billions of parameters which have been trained on large corpora of data have demonstrated non-trivial skills in a variety of domains,” read the report. “However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills.”

Rather than training new LLMs with new capabilities from scratch, the method adds functionality to existing models by composing them with their peers. The researchers submit that promoting interoperability via CALM saves the time and cost otherwise required to "enable newer capabilities."

Through CALM, LLMs can preserve their existing functionalities while unlocking applications in new domains without the hassle of fresh fine-tuning or retraining processes.

The researchers arrived at their conclusions by augmenting Google's PaLM2-S, an LLM touted to possess functionality comparable to OpenAI's GPT-4, with smaller AI models. In their submission, the new hybrid model demonstrated a significant improvement over baseline in coding and translation tasks.

“Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks—on-par with fully fine-tuned counterparts,” said the researchers.
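At a high level, this style of composition works by letting the anchor model's hidden states attend over the augmenting model's hidden states through a small, newly trained cross-attention layer, while both underlying models stay frozen. The sketch below is a toy numpy illustration of that idea under stated assumptions; it is not DeepMind's implementation, and all dimensions, weights, and function names here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(anchor_h, aug_h, Wq, Wk, Wv):
    """Anchor hidden states (queries) attend over the augmenting
    model's hidden states (keys/values). The residual add means the
    anchor's original representation is preserved, not overwritten."""
    q = anchor_h @ Wq                      # (T, d) queries from anchor
    k = aug_h @ Wk                         # (S, d) keys from augmenting model
    v = aug_h @ Wv                         # (S, d) values from augmenting model
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1)        # (T, S) attention weights
    return anchor_h + attn @ v             # residual composition

# Toy dimensions: 5 anchor tokens (dim 8), 7 augmenting tokens (dim 4).
T, S, d_a, d_b = 5, 7, 8, 4
anchor_h = rng.normal(size=(T, d_a))       # frozen anchor layer output
aug_h = rng.normal(size=(S, d_b))          # frozen augmenting layer output
Wq = rng.normal(size=(d_a, d_a)) * 0.1     # only these projections
Wk = rng.normal(size=(d_b, d_a)) * 0.1     # would be trained; both
Wv = rng.normal(size=(d_b, d_a)) * 0.1     # base models stay frozen

composed = cross_attend(anchor_h, aug_h, Wq, Wk, Wv)
print(composed.shape)  # (5, 8): same shape as the anchor's states
```

Because only the small projection matrices are learned, the anchor model keeps its existing behavior while picking up information from the augmenting model, which is the property the researchers cite as avoiding fresh fine-tuning or retraining.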

The research has several applications for the fledgling area of generative AI, including potential use cases in LLMs without English support and providing an answer to AI’s scaling and copyright issues.

Increasing research to enhance AI capabilities

In early 2023, researchers at Austria's University of Innsbruck unveiled a new metric for measuring the temporal validity of complex statements, designed to improve the capabilities of LLM-based chatbots.

Another study found a streak of mainstream AI models favoring sycophancy over factual responses as a result of their training methods, while other researchers are probing an integration between AI and blockchain technology.

Despite the pace of AI research, developers have to grapple with copyright complaints and the grim prospects of regulatory enforcement in the fledgling sector.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI takes center stage at London Chatbot Summit
