
    Labour Proposes Mandatory Disclosure of AI Road Test Results

    The Labour Party is calling for a shift from voluntary agreements to statutory regulation that would compel artificial intelligence (AI) companies to share the results of road testing of their technologies. Citing concern that social media platforms were allowed to grow without adequate oversight, the party wants to replace the existing voluntary testing arrangement between tech firms and the government with a mandatory framework.

    Peter Kyle, the shadow technology secretary, emphasized the need for legislators and regulators to stay ahead of developments in technology. Drawing parallels with the delayed response to social media issues, he asserted that Labour would not repeat the same mistake with AI. Kyle insisted on greater transparency from tech companies, particularly in the aftermath of the tragic murder of Brianna Ghey.

    Speaking on BBC One’s Sunday with Laura Kuenssberg, Kyle explained the proposed shift from a voluntary code to a statutory code. Under a Labour government, AI companies engaged in research and development would be obliged to release all test data and explain what they are testing for, giving ministers and regulators clearer visibility of how the technology is developing.

    In November, at the global AI safety summit, Rishi Sunak, the prime minister, reached a voluntary agreement with leading AI firms, including Google and OpenAI, to collaborate on testing advanced AI models before and after deployment. Labour’s proposal would go further, introducing a statutory requirement for AI firms to inform the government when they plan to develop systems above a certain level of capability and to carry out safety tests with independent oversight.

    The EU and 10 countries, including the US, UK, Japan, France, and Germany, endorsed the AI summit testing agreement. Notable tech companies, such as Google, OpenAI, Amazon, Microsoft, and Meta, have committed to testing their AI models as part of this initiative.

    Kyle, who is currently in the US meeting lawmakers and tech executives in Washington, said the test results would play a crucial role in the newly established UK AI Safety Institute’s efforts to independently scrutinize advances in AI. He stressed the profound impact the technology could have on the workplace, society and culture, and the importance of ensuring it is developed safely.
