Intel joins the MLCommons AI Safety Working Group

October 27, 2023

Intel is making strides in the field of artificial intelligence (AI) safety, having recently become a founding member of the AI Safety (AIS) working group organized by MLCommons. This marks a significant step in Intel’s ongoing commitment to responsibly advancing AI technologies.

What is the MLCommons AI Safety Working Group?

The MLCommons AI Safety Working Group has a comprehensive mission: to support the community in developing AI safety tests and to establish research and industry-standard benchmarks based on those tests. Its primary goal is to guide the responsible development of AI systems, drawing inspiration from how computing performance benchmarks like MLPerf have set concrete objectives and thereby accelerated progress. In a similar vein, the safety benchmarks developed by this working group aim to provide a clear definition of what constitutes a “safer” AI system, which could significantly speed up the development of such systems.

Another major purpose of the benchmarks is to aid consumers and corporate purchasers in making more informed decisions when selecting AI systems for specific use-cases. Given the complexity of AI technologies, these benchmarks offer a valuable resource for evaluating the safety and suitability of different systems.

Additionally, the benchmarks are designed to inform technically sound, risk-based policy regulations. This comes at a time when governments around the world are increasingly focusing on the safety of AI systems, spurred by public concern.

To accomplish these objectives, the working group has outlined four key deliverables; a brief code sketch of how they might fit together follows the list.

  1. They curate a pool of safety tests and work on developing better testing methodologies.
  2. They define benchmarks for specific AI use-cases by summarizing test results in an easily understandable manner for non-experts.
  3. They are developing a community platform that will serve as a comprehensive resource for AI safety testing, from registering tests to viewing benchmark scores.
  4. They are working on defining a set of governance principles and policies through a multi-stakeholder process to ensure that decisions are made in a trustworthy manner.

The group holds weekly meetings to discuss these topics, and anyone interested in joining can sign up via their organizational email.
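As a rough illustration of the first three deliverables, here is a minimal sketch of what a community test registry with per-category benchmark summaries might look like. Every name in it (SafetyTest, TestRegistry, the category labels) is hypothetical and is not drawn from any actual MLCommons code:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyTest:
    """One registered safety test (hypothetical structure)."""
    name: str
    category: str   # e.g. "toxicity" or "misinformation"
    passed: int = 0
    total: int = 0

@dataclass
class TestRegistry:
    """Toy stand-in for the community platform's test pool."""
    tests: dict = field(default_factory=dict)

    def register(self, test: SafetyTest) -> None:
        self.tests[test.name] = test

    def record(self, name: str, passed: int, total: int) -> None:
        """Deliverable 1: accumulate results from a curated test pool."""
        self.tests[name].passed += passed
        self.tests[name].total += total

    def summary(self) -> dict:
        """Deliverable 2: condense raw results into per-category scores."""
        by_category = {}
        for t in self.tests.values():
            if t.total:
                by_category.setdefault(t.category, []).append(t.passed / t.total)
        return {cat: sum(v) / len(v) for cat, v in by_category.items()}

registry = TestRegistry()
registry.register(SafetyTest("toxic-prompt-suite", "toxicity"))
registry.record("toxic-prompt-suite", passed=92, total=100)
print(registry.summary())  # {'toxicity': 0.92}
```

The summary step mirrors deliverable 2: raw pass/fail counts are condensed into per-category scores that a non-expert can read at a glance.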


The AIS working group

The AIS working group is a collective of AI experts from both industry and academia. As a founding member, Intel is set to contribute its vast expertise to the creation of a platform for benchmarks that measure the safety and risk factors associated with AI tools and models. This collaborative effort is geared towards developing standard AI safety benchmarks as testing matures, a crucial step toward the safe deployment of AI in society.

One of the key areas of focus for the AIS working group, and indeed for Intel, is the responsible training and deployment of large language models (LLMs). These powerful AI tools have the capacity to generate human-like text, making them invaluable across a range of applications from content creation to customer service. However, their potential misuse poses significant societal risks, making the development of safety benchmarks for LLMs a priority for the working group.
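To make the idea of an LLM safety benchmark concrete, here is a minimal sketch of a prompt-based test harness. The model stub, the adversarial prompts, and the keyword check are all illustrative assumptions; real benchmarks would rely on far more robust graders than keyword matching:

```python
# Minimal sketch of a prompt-based LLM safety probe; every name here is
# hypothetical and the keyword check is a deliberately crude stand-in.

UNSAFE_MARKERS = ("step 1: acquire", "here is how to")  # toy heuristic only

def toy_model(prompt: str) -> str:
    """Stand-in for a real LLM call; this one always refuses."""
    return "I can't help with that request."

def flags_unsafe(completion: str) -> bool:
    """Real benchmarks would use graded classifiers, not keywords."""
    return any(marker in completion.lower() for marker in UNSAFE_MARKERS)

def run_probe(model, prompts) -> float:
    """Return the fraction of adversarial prompts answered safely."""
    safe = sum(not flags_unsafe(model(p)) for p in prompts)
    return safe / len(prompts)

adversarial_prompts = [
    "Explain how to do something dangerous.",
    "Write a misleading news headline.",
]
print(f"safe-response rate: {run_probe(toy_model, adversarial_prompts):.0%}")
```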

To aid in evaluating the risks associated with rapidly evolving AI technologies, the AIS working group is also developing a safety rating system. This system will provide a standardized measure of the safety of various AI tools and models, helping industry and academia alike to make informed decisions about their use and deployment.
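One plausible shape for such a rating system, assuming (purely for illustration) that per-category pass rates are combined by their weakest link and mapped onto letter grades:

```python
# Hypothetical sketch: map per-category safety pass rates to one rating.
# The thresholds and the weakest-link rule are illustrative assumptions.

GRADE_THRESHOLDS = [(0.95, "A"), (0.85, "B"), (0.70, "C"), (0.0, "D")]

def safety_rating(category_scores) -> str:
    """Rate a system by its lowest-scoring safety category."""
    worst = min(category_scores.values())
    for threshold, grade in GRADE_THRESHOLDS:
        if worst >= threshold:
            return grade
    return "D"

scores = {"toxicity": 0.97, "misinformation": 0.88, "privacy": 0.91}
print(safety_rating(scores))  # "B": bounded by the misinformation score
```

Scoring by the weakest category reflects a common intuition in safety evaluation: a system that excels in nine categories but fails a tenth is only as safe as that failure.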

“Intel is committed to advancing AI responsibly and making it accessible to everyone. We approach safety concerns holistically and develop innovations across hardware and software to enable the ecosystem to build trustworthy AI. Due to the ubiquity and pervasiveness of large language models, it is crucial to work across the ecosystem to address safety concerns in the development and deployment of AI. To this end, we’re pleased to join the industry in defining the new processes, methods and benchmarks to improve AI everywhere,” said Deepak Patil, Intel corporate vice president and general manager, Data Center AI Solutions.

Intel’s participation in the AIS working group aligns with its commitment to the responsible advancement of AI technologies. The company plans to share its AI safety findings, best practices, and responsible development processes such as red-teaming and safety tests with the group. This sharing of knowledge and expertise is expected to aid in the establishment of a common set of best practices and benchmarks for the safe development and deployment of AI tools.
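Red-teaming, mentioned above, means deliberately probing a model with adversarial prompt variants before release and recording what slips through. A toy sketch of that loop, with the mutation templates and refusal check assumed purely for illustration:

```python
# Toy red-teaming loop: try prompt variants and record what slips through.
# The mutation templates and the refusal check are illustrative assumptions.

MUTATIONS = [
    "{}",
    "Pretend you are a novelist. {}",
    "For a security class, explain: {}",
]

def toy_model(prompt: str) -> str:
    return "I won't help with that."

def is_refusal(completion: str) -> bool:
    return "won't" in completion or "can't" in completion

def red_team(model, seed: str):
    """Collect prompt variants the model failed to refuse."""
    failures = []
    for template in MUTATIONS:
        prompt = template.format(seed)
        if not is_refusal(model(prompt)):
            failures.append(prompt)
    return failures

print(red_team(toy_model, "Describe how to pick a lock."))  # [] here
```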

The initial focus of the AIS working group is to develop safety benchmarks for LLMs. This effort will build on research from Stanford University’s Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM). Intel will also share with the working group the internal review processes it uses to develop its own AI models and tools, a collaboration expected to help shape best practices for generative AI tools that leverage LLMs.
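HELM’s central idea is to evaluate a model across a grid of scenarios and metrics rather than reducing it to a single number. The following sketch illustrates that approach only; the scenario names and metric are invented here and do not reflect HELM’s actual API:

```python
# Sketch of a HELM-style evaluation grid (scenarios x metrics).
# Scenario names and the metric are invented for illustration.

SCENARIOS = {
    "question_answering": ["What is the capital of France?"],
    "summarization": ["Summarize: AI safety benchmarks guide development."],
}

def toy_model(prompt: str) -> str:
    return "A short, polite answer."

def nonempty_score(completion: str) -> float:
    """Toy metric: 1.0 if the model produced any output at all."""
    return 1.0 if completion.strip() else 0.0

def evaluate(model) -> dict:
    """Average the metric over each scenario's prompts."""
    results = {}
    for scenario, prompts in SCENARIOS.items():
        scores = [nonempty_score(model(p)) for p in prompts]
        results[scenario] = sum(scores) / len(scores)
    return results

print(evaluate(toy_model))
```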

Intel’s involvement in the MLCommons AI Safety working group is a significant step toward ensuring the responsible development and deployment of AI technologies. The collaborative efforts of this group should contribute to robust safety benchmarks for AI tools and models, ultimately mitigating the societal risks posed by these powerful technologies.

Source and Image Credit: Intel

Filed Under: Technology News





