
New generative AI guidelines aim to curb research misconduct


CHINA

China’s Ministry of Science and Technology last month published new guidelines on the use of generative artificial intelligence in scientific research, as part of its efforts to improve scientific integrity and reduce research misconduct. The new rules notably include a ban on the ‘direct’ use of generative AI tools when applying for research funding and approval.

Under the guidelines, generative AI can still be used in research, but any content or findings that use the technology must be clearly labelled as such.

The ministry noted in particular that the rapid development of technologies such as AI “may give rise to new problems in the handling of research data, the formation and attribution of research works, and the attribution of intellectual property rights”, and that these technologies have prompted “profound changes in the nature of scientific research”.

The document, Guidelines for Responsible Research Conduct, issued by the ministry’s Supervision Department, was drawn up after extensive research and solicitation of opinions, according to the ministry. Released on 21 December 2023, it is intended as a guide for individual researchers as well as research institutes, universities and medical institutions, helping them to carry out scientific research “responsibly”.

It said the guidelines were based on a “broad consensus” in scientific circles and would be updated and adjusted in line with further technological developments. According to the ministry, the guidelines draw on “useful foreign experiences and reflect international practices”.

A supplement to generative AI law

The new guidelines, which are specifically geared to scientific research, expand on China’s existing generative AI law, which came into effect last August.

The guidelines require researchers to clearly label AI-generated content in their work and to describe the generation process to ensure the accuracy of the content and to protect others’ intellectual property rights.

The document also said any AI-generated content and results must be labelled within the text “especially when it involves key content such as facts and opinions”.

AI-generated content from other authors should not be cited as primary literature. “If it is necessary to cite such content, it should be clearly stated that it was generated by AI,” the guidelines say.

AI-generated content should also be identified in the footnotes, methodology section or appendices of research papers, along with explanations of how it was created and what software was used.

Any content marked as being generated by AI should not be treated as original literature, and if other authors want to cite this content “an explanation should be given”, the guidelines say.

References generated by AI cannot be used unless they are verified first; the guidelines note that overseas university libraries have warned that generative AI may fabricate references and quotes.

Science foundation standards

Also in December, the National Natural Science Foundation of China (NSFC), one of the country’s largest research funders, which falls under the science ministry, released a set of integrity standards. These require experts involved in evaluating science funding programmes to obtain NSFC permission before using generative AI in their work.

NSFC’s own guidelines for researchers on the use of generative AI tools closely follow the ministry document, but add some details, such as those relating to images used in research papers, where generative AI could be used.

For example, they say image processing needs to be carried out “with caution”.

“Image adjustments should not blur, eliminate or misrepresent all the information presented in the original image. They should not enhance, blur, move, remove or add,” it says.

Earlier this month Science, the top United States-based international journal, announced it would use the AI image analysis tool Proofig, which screens images for copying and other types of tampering, to detect modified images that can deliberately mislead readers in its journals.

Ethical reviews

In October 2023, China also published pilot rules requiring universities, research institutes, healthcare organisations and companies to conduct “ethical reviews” of certain aspects of their research involving AI. The rules took effect on 1 December.

The rules cover various aspects of the research process, including topic selection and peer review, with ethics, safety and transparency the main considerations.

One of the main concerns is the use of AI-generated content; under the new rules, AI cannot be listed as a co-author. Some scientists have listed AI tools such as ChatGPT as co-authors – a practice many journals have already banned.

A researcher in Guangzhou told University World News: “The effect of all of these guidelines for researchers will be the same as guidelines on research misconduct – good researchers will abide by them, but others who are not confident or who want to get ahead in the competition for research funding will use genAI tools regardless.

“Text generation in English is particularly useful for many young Chinese researchers as it saves them so much time. Many will definitely continue to use it even without attribution,” he noted.

The researcher pointed to previous ministry guidelines on research misconduct. “These guidelines are issued every few years, yet research misconduct continues and is reaching higher and higher levels. Adding new rules on genAI does not address the underlying problems of the research system in this country.”

The ministry, however, has said in its guidelines that they should be used to train young researchers.

Researchers say the Chinese government is particularly concerned about the country’s research reputation, with Chinese research suffering from high retraction rates in international academic journals.

Even before the pandemic, retractions of Chinese papers were among the highest in the world, with misconduct rather than error accounting for a large proportion of the total.


