UK Science, Innovation and Technology Committee Report on AI

As previously reported for the IPKat here, the UK Science, Innovation and Technology Select Committee recently conducted an inquiry into the impact of AI on several sectors. This Kat gave evidence at the Inquiry in the session focusing on the impact of AI on copyright and the creative industries (you can watch the session here). 

Today, the Select Committee has published its interim report - The Governance of Artificial Intelligence: Interim Report (Ninth Report of Session 2022-23, HC 1769). The report sets out the Committee’s findings from its inquiry so far, noting that:

The technology should not be viewed as a form of magic or as something that creates sentient machines capable of self-improvement and independent decisions. It is akin to other technologies: humans instruct a model or tool and use the outputs to inform, assist or augment a range of activities.

The report identifies the challenges posed by AI in general, as well as twelve essential challenges that AI governance must address if public safety and confidence in AI are to be secured. These are:

The twelve challenges of AI governance that must be addressed by policymakers:

1. Bias: AI can introduce or perpetuate biases that society finds unacceptable.


2. Privacy: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.

3. Misrepresentation: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.

4. Access to Data: The most powerful AI needs very large datasets, which are held by few organisations.

5. Access to Compute: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.

6. Black Box: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.

7. Open-Source: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.

8. Intellectual Property and Copyright: Some AI models and tools make use of other people's content: policy must establish the rights of the originators of this content, and these rights must be enforced.

9. Liability: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.

10. Employment: AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption.

11. International Coordination: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.

12. Existential: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.

The Intellectual Property and Copyright Challenge

This section of the report highlighted that, whilst the use of AI models and tools has helped create revenue for the entertainment industry in areas such as video games and audience analytics, concerns were raised during the inquiry about the use of copyright-protected material without permission. It noted that the UK Intellectual Property Office has begun developing a voluntary code of practice on copyright and AI, in consultation with the technology, creative and research sectors. The Government has said that if agreement is not reached or the code is not adopted, it may legislate.

In relation to the proposed extension to the copyright exception for data mining, the report noted that the Government withdrew its proposal following criticism from the creative industries, highlighting that “these competing incentives define the intellectual property and copyright challenge.”

Overall, the report urges the UK Government to accelerate its efforts and legislate on the governance of AI in this new session of Parliament, since a general election is expected in 2024, which would likely delay the enactment of any new legislation until 2025. You can read the full report here.

Reviewed by Hayleigh Bosher on Thursday, August 31, 2023

2 comments:

  1. I don't like leaving comments which are entirely negative, but the report simply reads like the results of a Google search, and a selection of different points is discussed in a superficial way (as if at a dinner party). There is no real discussion of the harm that AI could do in surveillance, in buying and selling stocks/assets (and so destabilising the financial system), in being part of the judicial/legal process (and so not understanding human factors in crime), in controlling lethal military equipment, or in teaching young children. This is too important a subject for the UK to take such an amateurish approach. There is a huge amount of research on the negative impacts of AI which is not discussed or properly recognised here, and that makes this report very unhelpful in guiding policy.

  2. @Santa
    The objective stressed in the report is for the UK to position itself as a world player in respect of AI regulation, with particular emphasis on flexibility and understanding of big tech firms’ interests. It may be assumed that there is efficient communication between Nick Clegg, current president of global affairs at Meta, and UK political officials – both in government and Parliament.

    As you point out, the report is general to the point of hollowness. Look, for example, at the "privacy challenge". The report is remarkably silent on the adaptation of the GDPR, although the issue is clearly highly relevant to the business models of AI and of prime significance.

    A recent article ("Car driver data grab presents privacy nightmare", Guardian, 6 Sept 2023) sheds unpalatable light on the commercial use and financial benefits to carmakers of the data relating to users of AI-driven cars, covering behavioural patterns, medical conditions, personal preferences (video, music, ...) and even, sometimes, sexual life. Such data have immense value. Benefits can be reaped according to various business models. And as the article explains, it is difficult in practice for car buyers to "opt out" and prevent the recording and commercial use of their personal data, regardless of whether the carmakers' practices comply with applicable privacy law.

    As the use of connected devices broadens to all areas, the scope of personal user data raising privacy issues is certain to broaden as well.


