Lack of knowledge: Although GPT-3 can generate new papers, it lacks the knowledge and judgment to assess whether the generated content is accurate, complete, and correct, and it cannot guarantee the factual reliability or professional quality of what it writes.
First, we can use articles we have written ourselves as training data to fine-tune the GPT model, and then use the fine-tuned model to produce post-reading reflections. Second, we can treat the reflection as a text-generation task: provide the reading material to the model and ask it to generate a reflection that meets the specified requirements (length, tone, focus, and so on), as sketched below.
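As an illustration of that two-step workflow, here is a minimal sketch assuming OpenAI's Python SDK (v1) with its fine-tuning and chat APIs; the file names, the base model name, and the prompt wording are hypothetical placeholders, not anything prescribed by the original text.

```python
# Minimal sketch: fine-tune on our own articles, then prompt for a reflection.
# Assumes the openai>=1.0 SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Step 1: upload our past articles as chat-formatted JSONL training data.
train_file = client.files.create(
    file=open("our_articles.jsonl", "rb"),  # hypothetical local file
    purpose="fine-tune",
)

# Step 2: fine-tune a base model on that data.
job = client.fine_tuning.jobs.create(
    training_file=train_file.id,
    model="gpt-3.5-turbo",  # an assumed fine-tunable base model
)
print("fine-tuning job started:", job.id)

# Step 3 (once the job finishes): treat the reflection as text generation,
# supplying the reading material and the requirements in the prompt.
reading_material = open("reading_material.txt").read()  # hypothetical file
response = client.chat.completions.create(
    model=job.fine_tuned_model or "gpt-3.5-turbo",  # fallback until training completes
    messages=[
        {"role": "system", "content": "You write post-reading reflections."},
        {"role": "user",
         "content": "Read the following text and write a 500-word reflection "
                    "on it:\n\n" + reading_material},
    ],
)
print(response.choices[0].message.content)
```

In practice the requirements in the user message (word count, perspective, points to cover) are what steer the generated reflection; the fine-tuning step only shapes the writing style toward our own articles.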
The duplication rate of papers written by GPT is not high. Test results show that the repetition rate of a first draft can generally be kept below 30%; typical plagiarism-check thresholds are 20% at ordinary undergraduate institutions and 30% at more lenient ones, so as a first draft of a paper this is entirely acceptable.
GPT-GNN pre-trains a graph neural network by reconstructing/generating both the structural information (edges) and the node attribute information of the input graph.
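To make that objective concrete, here is a minimal self-contained sketch of GPT-GNN-style generative pre-training, a deliberate simplification and not the authors' implementation: it hides some node attributes, encodes the graph with a toy one-layer GNN, and trains one head to reconstruct the hidden attributes and another to score edges against random negative pairs. The graph, layer, and head designs are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy random graph: 6 nodes with 8-dim attributes, dense symmetric adjacency.
num_nodes, feat_dim, hid_dim = 6, 8, 16
x = torch.randn(num_nodes, feat_dim)
adj = (torch.rand(num_nodes, num_nodes) < 0.4).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)

class TinyGNN(nn.Module):
    """One round of mean-neighbor message passing plus a linear update."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        return F.relu(self.lin((adj @ x) / deg + x))

encoder = TinyGNN(feat_dim, hid_dim)
attr_head = nn.Linear(hid_dim, feat_dim)      # reconstructs node attributes
edge_head = nn.Bilinear(hid_dim, hid_dim, 1)  # scores candidate edges

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(attr_head.parameters())
    + list(edge_head.parameters()), lr=1e-2)

for step in range(200):
    # 1) Mask the attributes of a random subset of nodes.
    mask = torch.rand(num_nodes) < 0.3
    x_in = x.clone()
    x_in[mask] = 0.0

    h = encoder(x_in, adj)

    # 2) Attribute-generation loss: recover the hidden node attributes.
    attr_loss = (F.mse_loss(attr_head(h[mask]), x[mask])
                 if mask.any() else torch.tensor(0.0))

    # 3) Edge-generation loss: real edges should outscore random pairs.
    src, dst = adj.nonzero(as_tuple=True)
    neg_dst = torch.randint(num_nodes, dst.shape)
    pos = edge_head(h[src], h[dst]).squeeze(-1)
    neg = edge_head(h[src], h[neg_dst]).squeeze(-1)
    edge_loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))

    loss = attr_loss + edge_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final pre-training loss:", loss.item())
```

The two loss terms mirror the two kinds of information the paragraph above mentions: attribute reconstruction covers node attribute information, and edge scoring covers structural information. The pre-trained encoder would then be fine-tuned on a downstream graph task.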
Duplication-rate reduction (paraphrasing) refers to revising an original paper so that it differs from other documents and avoids being flagged as plagiarism. This requires sentence-by-sentence inspection and revision of the original text, whereas GPT can only generate new language expressions and lacks the ability to understand and revise the original paper.