Key members of the House Science Committee have expressed reservations about a proposed research collaboration on artificial intelligence (AI) between the National Institute of Standards and Technology (NIST) and RAND Corp. The bipartisan group of lawmakers sent a letter to NIST on December 14, criticizing the agency for lacking transparency and failing to announce a competitive process for planned research grants related to the newly established U.S. AI Safety Institute.
The letter highlighted concerns about the transparency and quality of AI safety research conducted by external organizations, citing a tendency to “hide behind secrecy” and a lack of evidence for claims. The lawmakers, including House Science Chair Frank Lucas and ranking member Zoe Lofgren, emphasized the need for thorough research rather than rushed initiatives.
While NIST has not disclosed the organizations set to receive research grants through the AI Safety Institute, sources indicate that RAND is one of them. RAND recently released a report on biosecurity risks associated with advanced AI models, which the House letter mentioned as an example of research that lacked academic peer review.
After the publication of this story, a RAND spokesperson disputed the characterization of the report, claiming it went through a rigorous quality assurance process, including peer review.
The House Science Committee’s concerns stem from NIST’s limited resources and its resulting reliance on outside organizations to carry out an expanding AI mandate. The lawmakers urged NIST to prioritize scientific merit and transparency, and to hold recipients of federal research funding to rigorous scientific and methodological standards.
NIST, a low-profile agency within the Commerce Department, plays a central role in President Joe Biden’s AI plans, including the establishment of the AI Safety Institute. While NIST has not confirmed specific partnerships, it stated that it is exploring options for a competitive process for cooperative research opportunities.
The House Science Committee’s reservations are linked to RAND’s association with Open Philanthropy, a major funder of effective altruist causes. The lawmakers warned NIST about potential conflicts of interest and urged the agency to maintain scientific independence in its AI initiatives.
Despite these concerns, some in the AI community view the episode as a sign of growing awareness on Capitol Hill of the role of measurement and governance in regulating AI. The House Science Committee’s call for a comprehensive and transparent approach to AI research reflects an increasing appreciation of the complexities of AI governance.