As artificial intelligence (AI) innovation proceeds at a breathtaking pace, the European Union has become the first region to pen rules intended to provide regulatory clarity to developers and businesses looking to use AI technology.
The AI Act received provisional approval on Tuesday, with two key committees, the Committee on Civil Liberties, Justice and Home Affairs (LIBE) and the Committee on the Internal Market and Consumer Protection (IMCO), both voting in its favor.
While the official vote won’t happen in the EU Parliament until spring, it’s expected to sail through since it received a favorable vote at the provisional stage.
What’s in the EU AI Act?
The AI Act will require developers to conduct risk assessments, placing AI capabilities into four risk categories: minimal, limited, high, and unacceptable. It will also ban predictive policing, while generative apps like ChatGPT will receive minimal oversight.
The Act also bans multiple AI applications outright, including:
- biometric categorization systems that use sensitive characteristics;
- untargeted scraping to create facial recognition databases;
- emotion recognition in the workplace or at school;
- social scoring based on behavior or personal characteristics;
- systems that manipulate human behavior and circumvent free will; and
- AI systems used to exploit people's vulnerabilities.
There are some exceptions for law enforcement, such as the ability to use biometric identification systems to perform targeted searches for people suspected or convicted of a crime. This would be subject to prior judicial authorization and could only be used for a strictly defined list of crimes.
Is the EU over-regulating again?
It’s no secret that the EU’s economy has been lagging behind the U.S. for decades. Some critics suggest that over-regulation drives startups away from the continent and places undue burdens on would-be entrepreneurs. Indeed, the advocacy group Digital Europe has made exactly that criticism of the AI Act.
However, Gartner’s Nader Henein, an information privacy advocate and researcher, has said the rules “are in no way a hindrance to innovation.” He emphasized that innovation finds a way to work within regulatory bounds and turn them into an advantage. He said adoption of the rules should be “straightforward.”
While the rules will no doubt need to be updated as AI development continues to outpace anyone's ability to fully grasp it, the AI Act is a start. Along with the Markets in Crypto Assets (MiCA) regulation and other rules related to blockchain and digital currencies, it shows the EU proceeding full steam ahead toward its Digital Decade targets.
Speaking of blockchains…
Describing the AI Act online, the EU said its priorities were ensuring that AI systems used within its borders are safe, transparent, traceable, non-discriminatory, and environmentally friendly. It emphasizes that “AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
Given its deep dive into blockchain technology and digital currencies, the EU may already know how this technology is tailor-made to ensure safe, transparent AI development. Distributed digital ledgers can create a tamper-proof, time-stamped evidence trail to ensure the integrity of the data AI models use and to create accountability for developers.
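To illustrate the idea, here is a minimal sketch in Python of such an evidence trail (all class and field names here are hypothetical, not drawn from any EU or BSV specification): each record commits to a dataset hash, a timestamp, an accountable developer, and the hash of the previous record, so altering history breaks the chain.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

class EvidenceTrail:
    """A minimal hash-chained audit log for AI training data.

    Each entry commits to the previous entry's hash, so altering any
    record breaks every hash that follows it: the same tamper-evidence
    property a blockchain provides, minus the distributed consensus.
    """

    def __init__(self):
        self.entries = []

    def record(self, developer_id: str, dataset: bytes) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),              # when the data was logged
            "developer": developer_id,             # who is accountable
            "dataset_hash": sha256_hex(dataset),   # what data was used
            "prev_hash": prev_hash,                # link to the prior entry
        }
        entry["entry_hash"] = sha256_hex(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry
```

On a public blockchain such as BSV, each entry hash would additionally be anchored in an on-chain transaction, making the timestamps and ordering independently verifiable by anyone rather than held in a single party's memory.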
Mitigating the risks of AI through transparency
While those who dislike government regulation of any kind will object to whatever rules are proposed, it can't be denied that some accountability and risk management are necessary when it comes to AI.
The potential benefits of AI are well-known and widely promoted, including scientific breakthroughs, new inventions, and more. However, as with the Internet, every tool has the potential to be used for evil; predictive policing, AI-generated deepfakes, and systemic risk to financial markets are just some of the ways AI could go wrong if it falls into the wrong hands or we lose control of it.
If safety, transparency, and traceability are the goals, and they should be, scalable public blockchains like BSV are the best tools for the job. Knowing what happened, when, and who is responsible is imperative if regulations to govern AI are to have any teeth.
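Continuing the hypothetical sketch above, a verifier, say a regulator or auditor, only needs to recompute the chain to confirm what happened, when, and who logged it; any edit to a past record is immediately detectable.

```python
# Reuses EvidenceTrail and sha256_hex from the earlier sketch.

def verify(trail: EvidenceTrail) -> bool:
    """Recompute every hash in the trail; any edit to a past record fails here."""
    prev_hash = "0" * 64
    for entry in trail.entries:
        if entry["prev_hash"] != prev_hash:
            return False  # chain link broken: an entry was removed or reordered
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if sha256_hex(json.dumps(body, sort_keys=True).encode()) != entry["entry_hash"]:
            return False  # record contents were altered after logging
        prev_hash = entry["entry_hash"]
    return True

# Usage: tampering with any logged record makes verification fail.
trail = EvidenceTrail()
trail.record("dev-001", b"training data v1")
assert verify(trail)
trail.entries[0]["developer"] = "someone-else"
assert not verify(trail)
```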
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Micropayments are what are going to allow people to trust AI