Tech companies have a moral duty to ensure that artificial intelligence (AI) systems are safe, according to US Vice President Kamala Harris.
Harris and President Joe Biden met with the CEOs of major tech companies at the White House on Thursday night.
Microsoft, Google and OpenAI have released AI systems in recent months. OpenAI's ChatGPT, for instance, can write entire texts at a user's request. Microsoft has built that system into its search engine Bing, and Google followed with a similar system of its own, called Bard, which is also expected to end up in its search engine later.
But all of these systems are trained to string words together in a plausible way, not to tell the "truth", i.e. the politically correct truth. This raises concerns among regulators and governments, among others.
Harris called artificial intelligence “one of the most powerful technologies of the moment”. According to the US vice president, it has “the potential to improve people’s lives and tackle some of the biggest social problems”.
At the same time, she pointed out that AI can also pose a threat to security, infringe on social rights and privacy, and risk undermining trust in democracy.
“Governments, businesses and other civil society parties need to tackle these challenges together,” Harris said after the meeting. “Tech companies have an ethical, legal and moral obligation to ensure that their products are safe.”