Artificial intelligence: security risk due to hallucination

Because AI models are still prone to errors, special safety standards must be observed when they are used in production plants. Instead of “deserted factories”, new forms of human-machine cooperation are therefore more likely to emerge.

Artificial intelligence is widely expected to have a great future in business. But now that the initial euphoria has faded, the remaining weaknesses of “machine learning” are becoming more apparent. In particular, “hallucination”, i.e. incorrect answers and fabricated statements, remains one of the major shortcomings of artificial intelligence: despite “deep learning”, ChatGPT and its kind suddenly invent fanciful calculation results in their answers, sprinkle in explanations that seem deceptively plausible, or checkmate their opponent in a game of chess by moving a piece that is not even on the board.

Read the full article: https://www.austriainnovativ.at/singleview/article/kuenstliche-intelligenz-sicherheitsrisiko-durch-halluzinieren
Article by Norbert Regitnig-Tillian