UK’s Advance Towards AI Safety Results in Low Credibility

New Delhi:- The UK government has spent recent weeks trying to burnish its image as an international powerhouse in the burgeoning AI safety sector. Last month it pledged £100m for a foundation model taskforce to conduct "state of the art" AI safety research, alongside a splashy announcement of a summit on the issue.

But the government, led by British Prime Minister and Silicon Valley superfan Rishi Sunak, has sidestepped the need for new domestic legislation to regulate applications of AI, a stance its own policy paper bills as "pro-innovation".

The government is also in the process of pushing through a deregulation of the national data protection framework, which risks undermining AI safety.


The latter is one of several conclusions published by the independent, research-focused Ada Lovelace Institute, part of the non-profit Nuffield Foundation, in a new report examining the UK's approach to regulating AI. The report makes for diplomatically worded but at times distinctly awkward reading for ministers.

The report contains no fewer than 18 recommendations for shoring up government policy and credibility in this area, if the UK wants to be taken seriously on the issue.

The institute advocates an "expansive" definition of AI safety, one that "reflects the range of harms that arise as AI systems become more powerful and embedded in society." The report is therefore concerned with regulating the harms "AI systems can cause today", that is, actual AI harms. (As opposed to the sci-fi-inspired theoretical risks that some prominent tech industry insiders have recently hyped, apparently in a bid to capture policymakers' attention.)

So far, it's fair to say the Sunak government's approach to regulating (real-world) AI safety has been contradictory: heavy on industry-pleasing publicity claiming a commitment to safety, but light on policy proposals that would set practical rules to guard against the patchwork of risks and harms we know can flow from the application of automation.


The report's extensive list of recommendations also reveals that the Institute sees much room for improvement in the UK's current approach to AI.

Earlier this year, the government set out its preferred approach to regulating AI in the country, saying it saw no need for new laws or regulators at this time. Instead, its white paper offered a flexible set of principles that existing sector-specific (and/or cross-cutting) regulators were merely suggested to "interpret and apply" to AI uses within their remits, without any new legal powers or additional resources for monitoring novel applications of AI.

The five principles set out in the white paper are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. That all sounds great on paper, but paper alone is clearly not enough when it comes to regulating AI safety.

The UK's plan of leaving existing regulators to figure out what to do about AI, armed with nothing more than some broad principles to aim at and no new resources, stands in contrast to the busy EU agenda, where lawmakers are hammering out agreement on a risk-based framework the bloc's executive proposed back in 2021.

The plan loads new responsibilities onto overstretched incumbent regulators, asking them to oversee AI developments on their home turf without any powers to enforce outcomes against bad actors. The UK's shoestring-budget approach does not look very credible when it comes to AI safety, to say the least.


Nor does it seem a coherent strategy if you're shooting for being pro-innovation, since it will demand AI developers consider a whole patchwork of sector-specific and cross-cutting legislation drafted long before the latest AI boom. Developers may also find themselves subject to oversight by a number of different regulatory bodies (however weaksauce their attention might be, given the lack of resources and legal firepower to enforce the aforementioned principles). So, really, it looks like a recipe for uncertainty over which existing rules may apply to AI apps, and, most probably, for a patchwork of regulatory interpretations depending on the sector, use case and oversight bodies involved. Ergo, confusion and cost, not clarity.

Even if existing UK regulators do quickly produce guidance on how they will approach AI, as some are already doing or working towards, there will still be plenty of gaps, as the Ada Lovelace Institute's report also points out, since coverage gaps are a feature of the UK's existing regulatory landscape. Proposals to simply extend this approach therefore suggest that regulatory inconsistencies will become entrenched and even amplified as the use of AI scales up across all sectors.


In short, the United Kingdom's approach to AI safety and security, as currently set out by Prime Minister Rishi Sunak's government, carries shortcomings that leave it with low credibility, the report concludes.
