Chemical language models don’t need to understand chemistry
A study by b-it shows that transformer models used in chemistry learn primarily statistical correlations.
A new publication by b-it Professor Dr. Jürgen Bajorath in the journal "Cell Reports Physical Science" shows how science can benefit from AI and what scientists need to look out for.
A new study conducted by Prof. Dr. Bajorath and Sanjana Srinivasan at b-it and the Lamarr-Institute at the University of Bonn shows the potential of language models for finding new medications. The researchers created a chemical language model, comparable to ChatGPT, to predict potential active ingredients with special properties. Following a training phase, the AI was able to exactly reproduce the chemical structures of compounds with known dual-target activity, which may be particularly effective medications.
Artificial intelligence (AI) is on the rise. Until now, AI applications have generally had a "black box" character: how AI arrives at its results remains hidden. Prof. Dr. Jürgen Bajorath, a cheminformatics scientist at b-it, and his team have developed a method that reveals how certain AI applications in pharmaceutical research work. The results are unexpected: when predicting drug potency, the AI programs largely memorized known data and hardly learned specific chemical interactions. The findings have now been published in "Nature Machine Intelligence".