Intel Joins MLCommons AI Safety Working Group



Sydney, Oct 27: Intel has officially become a member of the MLCommons AI Safety (AIS) working group, collaborating with leading AI experts from both industry and academia.
As a founding member of this initiative, Intel aims to contribute its expertise to create a flexible benchmarking platform for assessing the safety and risk factors associated with AI tools and models.
Standardized AI safety benchmarks are essential as AI technology continues to evolve. Once established, these benchmarks will shape how society approaches AI deployment and safety.
Deepak Patil, Intel’s corporate vice president and general manager for Data Center AI Solutions, emphasized the company’s commitment to advancing AI in a responsible manner.
Intel acknowledges the widespread use of large language models (LLMs) and the importance of addressing their safety concerns collaboratively across the ecosystem.
Responsible training and deployment of LLMs and other AI tools are essential to managing the societal risks these powerful technologies pose.
Intel has been proactive in recognizing the ethical and human rights implications tied to technology development, particularly in the AI domain.
The AI Safety working group, organized by MLCommons, brings together a diverse group of AI experts to develop a platform and a pool of tests that will support AI safety benchmarks for various use cases. Intel’s involvement in this working group aligns with the company’s ongoing efforts to responsibly advance AI technologies.
Intel plans to share its findings on AI safety and best practices, including processes for responsible development, such as red-teaming and safety tests.
The collaboration will result in a common set of best practices and benchmarks for evaluating the safe development and deployment of generative AI tools, especially those leveraging LLMs.
The working group’s initial focus will be on creating safety benchmarks for LLMs, building upon the work of researchers at Stanford University’s Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM).
For more information about participating members, visit the MLCommons website. Intel’s participation underscores its commitment to driving AI innovation while prioritizing safety and ethical considerations.
