
Learning and Large Language Models: Charting ChatGPT’s changes to curriculum



ChatGPT, OpenAI’s artificial intelligence language model, recently marked one year since its public release on Nov. 30, 2022, and has emerged as a transformative force in both technology and academia. At UC Santa Barbara, this influence is particularly evident, with ChatGPT playing an increasingly central role in redefining some course structures and teaching strategies.

ChatGPT has quickly established itself as a versatile and accessible tool in the realm of artificial intelligence. This large language model (LLM) can comprehend and generate human-like text, with its capabilities extending to answering questions, assisting with creative writing, coding and much more. 

Trained on datasets such as Common Crawl, which contains billions of web pages, ChatGPT draws on text written by other people to generate new answers to a given query. For the academic community at UCSB, these features have opened new avenues for both learning and teaching.

Brandon Meggerson, a third-year physics major, said that he uses ChatGPT to help study. 

“I use ChatGPT about once per week,” he said. “As a physics major, ChatGPT has been very useful when looking up what a specific theorem or equation is used for, [since] it gives the equation itself along with its common applications and origins.”  

However, despite its impressive capabilities, ChatGPT is not infallible. 

“I have noticed that it’s not good at applying the formula to solve problems. … It would give a correct process for solving the problem but give an incorrect answer. ChatGPT essentially fails to solve any math or physics problems that require critical thinking to solve,” Meggerson said.

ChatGPT is also used by students in non-STEM majors for writing purposes.

Priscila Villegas, a third-year film and media studies major, said that the majority of her assignments are writing-based and that she uses ChatGPT for inspiration.

“If I’m having like a brain fart … and I just can’t really think of ideas, I ask ChatGPT, and then it sprouts ideas [for the paper],” she said.

Some professors have even begun to embrace ChatGPT as a resource for their academic content. 

“[Professors] mostly talk about how to not use it, but I had a class [where] we had to use ChatGPT to write a paper … you either use ChatGPT or an AI picture generator and tell it to generate something and you analyze what it generated,” Villegas said. 

Katie Baillargeon, a professor in UCSB’s Writing Program who teaches both upper- and lower-division writing courses and has recently branched out into faculty writing consultations and workshops, said that she has begun to include ChatGPT in her assignments.

“I have integrated ChatGPT into some early-stage process assignments in a few of my courses as I figure we might as well use it purposefully; I always have a few students simply refuse to do those activities, though, so I give them alternatives.”

Increasing the share of the final grade that participation or in-class assignments are worth has also been an emerging trend in some classes in response to ChatGPT.

Phoebe Choi, a third-year film and media studies major who is also taking history courses this quarter, said that many of her classes are placing more value on in-class evaluation.

“Most classes have implemented more in-class essays or in-class activities so that we can’t necessarily use online sources. We have more class discussions and then participation is [graded higher],” Choi said. 

Choi said that the increased focus on participation is a welcome change for her. 

“Because learning has become so virtual, we’re just so used to using the internet and online tools … In regards to having more participation, I think it’s … a higher quality learning,” she said.

As of the 2022-2023 New Faculty Handbook, there is no requirement for professors to take a stance on ChatGPT, whether integrating it or banning it. Because of the lack of guidelines for ChatGPT’s role in academia at this preliminary stage, professors have taken different approaches to the new landscape.

Some professors have opted to change the structure of their syllabi to account for the use of ChatGPT. During Fall Quarter 2023, statistics and computer science professor Sharon Solis removed homework assignments from her PSTAT 8 (Transition to Data Science, Probability and Statistics) course after discussing ChatGPT in the academic landscape with her colleagues in the graduate seminar CMPSC 595J: Teaching with Large Language Models.

As a result, the revised syllabus for PSTAT 8 placed more emphasis on exams. The midterm and final exams made up 80% of the final grade, with the remaining 20% coming from lecture and section participation activities.

“As we’re kind of having these discussions with grad students about all the pros and cons of something like ChatGPT or another large language model, it really came top of mind how prevalent it is, and maybe it makes homeworks invalid like not valuable anymore,” Solis said. “So teaching [PSTAT 8], I thought if homeworks are not valuable anymore, then let’s just not even deal with homeworks.”

However, the lack of homework yielded unforeseen consequences for Solis, who found it challenging to evaluate her students’ performance in the class.

“I’ve really come to kind of change my mind … because then, first hand experience, students in … PSTAT 8 didn’t get any feedback from us before the midterm exam,” Solis said. “I didn’t know personally how students were performing, how students were writing before the midterm exam. So that was really scary on my part for me as an instructor, and I’m sure for students to not get any type of formal feedback, in terms of grading, before an exam.”

Baillargeon and her colleagues face different circumstances as writing instructors. 

“We’ve had quarterly workshops and general updates on LLMs from the amazing Dan Frank, along with group discussions on how to best handle the use of ChatGPT, etc., in our courses. There’s a wide array of opinions about LLMs in the Writing Program. I think the idea is to equip each of us with information so we can choose how to navigate it in our courses in the ways that feel right to us,” she said.

“I’ve asked students to be honest and clear about if they’ve used any AI in their work by providing a statement that articulates what they did and how as well as noting anything they discarded as being unhelpful,” she said of her own response to ChatGPT. “Transparency is the best policy.” Additionally, because she uses specifications grading, Baillargeon said she has not had to adjust her grading policies.

Although adapting to ChatGPT through trial and error can be difficult at times, Solis said she is determined to adjust to the existence of the new tool. 

“I’m really grateful to be part of the seminar because I think the more me as an educator, I can understand the landscape that my students are coming from the tools that are accessible to students, the better I’m able to kind of come to terms with that and then understand and then try to maybe acknowledge those tools exist,” Solis said.

Professors have also faced academic dishonesty issues involving ChatGPT. The Office of the Executive Vice Chancellor issued a memo titled “Guidance Regarding AI-Writing Assistance Technologies” on May 1, 2023 to Senate Faculty and Unit 18 Faculty (which includes lecturers and supervisors of teacher education) that discussed UCSB’s stance on AI-writing assistance technologies and how to address the use of such technologies within the classroom.

The memo’s points include the ethical use of AI technologies for plagiarism detection, organizing class structures to “avoid issues with student use of AI-assistive technologies” and ways to incorporate LLMs into courses. The memo stated that plagiarism detection software is not supported by UCSB because of its lack of accuracy. Faculty are encouraged to design writing assignments that involve LLMs through prompt-generating and comparison activities or that “require personal reflection or creative thinking” to steer students away from unethical use of AI-assistive technologies.

Baillargeon commented on these plagiarism detection services: “The thing is that any of those ‘detectors’ are bunk, as far as I know. I’ve had students tell me they were erroneously accused in other [non-writing] classes, and they were pretty distraught about the whole thing because how can you disprove it?”

In regard to unauthorized student use of LLMs, the memo stated that “the use of AI writing technologies falls within the purview of the Student Conduct Code and the Student Guide to Academic Integrity” and encouraged faculty to report instances of such use.

“The statement was drafted by Office of Teaching and Learning instructional consultants (who regularly work with faculty, graduate students, and undergraduate students on issues related to teaching and learning) in consultation with the Office of Student Conduct. It was drafted following two workshops with faculty, students, and staff to explore the underlying structures of implications of Large Language Models,” the UCSB Office of Teaching and Learning (OTL) said in a statement to the Nexus.

The UCSB Office of Student Conduct outlines categories on their website of academic dishonesty, including cheating, plagiarism, furnishing false information, unauthorized collaboration and misuse of course materials. ChatGPT is explicitly mentioned under the larger umbrella category of cheating, defined as the “unauthorized use of artificial-intelligence programs (e.g., ChatGPT) to complete course work.” 

However, ChatGPT is still in its early stages. 

“UCSB’s AI policies will doubtless change going forward. AI and Large Language Models are becoming more proficient by the day,” the OTL said.

“I think banning the use of LLMs will be hard since it is difficult to tell the difference between human and LLM response,” Ambuj Singh, the computer science professor in charge of the graduate seminar CMPSC 595J: Teaching with Large Language Models, said.

Singh is optimistic about the usage of LLMs in university education in the future. 

“The nature of the courses and the assignments will likely change but LLMs like ChatGPT will allow students to focus more on problem solving,” Singh said.

According to Singh, professors, students and universities must be ready to adapt to the next generation’s academic landscape, especially due to rapidly emerging methods of using ChatGPT in educational contexts. 

“There will likely not be a single path. Every course may have a different learning outcome. We need to experiment,” Singh said.

Baillargeon echoed the sentiment.

“It’s an ever-evolving situation, so what I’m saying to you here may not apply six months from now … I hope the longer term impact is that students can use it to save time on lower-level cognitive work, which then would mean they can devote time and brain space to more complex cognitive tasks.”

Even with the increasing use of LLMs, the OTL asserted that “[LLMs] cannot critically analyze contexts and situations; they cannot connect peoples’ identities and commitments to core disciplinary concepts; they cannot build relationships, engage in group work or motivate agency for change. These are skills and dispositions that people bring.” 


