
If you generate it…

Photos by Patrick Campbell, Kimberly Coffin (CritMedia, StratComm’18), Nathan Thompson (Jour’24)

When Averie Dow tells the parents of prospective CU Boulder students that she’s studying information science, one of the first questions she typically gets is around generative artificial intelligence.

“How A.I. is used in the classroom is their biggest concern, because they don’t want to send their kids here to just have them use A.I. for everything,” said Dow, a senior and university tour guide. “They tend to be grateful when I tell them our faculty acknowledge A.I., and that they have policies around when and how to use it. Because you can’t let it do your work for you, but you also can’t pretend it doesn’t exist, or you’ll graduate into a workplace where you’re the only one who can’t use it.”

“The way we learn about (A.I.) helps make it a smaller problem. I’m seeing the advantages that come from using it responsibly and ethically.”

Averie Dow

Finding that balance has been especially important to a discipline like information science, which incorporates ideas from computer science, social science and the humanities to reimagine how technology can unlock possibilities and work better for people.

A.I. is nothing new to faculty in the information science department of the College of Communication, Media, Design and Information, but the proliferation of tools like ChatGPT, Gemini and Claude, along with the scramble by businesses for cost-saving innovations, has meant constant curricular course corrections to keep pace with shifts in the market. In February, the University of Colorado system announced a $2 million licensing deal with OpenAI to bring ChatGPT to all students, staff and faculty.

But rather than focusing on particular tools, CMDI faculty teach students to think critically about the problem they’re trying to solve, as well as the benefits and limitations of the tools at their disposal.


Robin Burke

“Not every problem needs the biggest hammer,” said Robin Burke, a professor who studies recommender systems and teaches an undergraduate course on applied machine learning.

In that course, “we do talk about deep learning technologies, but we spend a lot of time on other machine learning techniques, because it’s important to know that range of possibilities,” he said. “You only get that if you understand what’s going on under the hood.”

Students who are technically oriented said they appreciate the real-world use cases, where they can see what using A.I. looks like at work. Kaeden Stander is pursuing a master’s in information science to go with the bachelor’s degree he’s on track to earn in December. Thinking about how to use A.I. tools in the college’s Digital Legacy Clinic has helped him with his entrepreneurial aspirations; he’s the founder of a content generation platform for WordPress sites.

‘A.I. is in every aspect of the workplace now’

“You input your information and the A.I. learns from your brand,” Stander said. “Then it’s able to make recommendations, generate blogs, social media captions, podcasts and even help create detailed data visualizations.”

His real-world experience using A.I. has helped him appreciate how to use it in class. In courses he takes for his philosophy minor, Stander said, no A.I. use is permitted, “so I don’t use it, but it’s not realistic—A.I. is in every aspect of the workplace now.”

“Why gatekeep something and put people behind when they should be ahead coming out of college? It’s something the information science program does well.”


Professor Jed Brubaker, center, of the Digital Legacy Clinic, which challenges students to help members of the community make plans for their digital estates. Photo by Patrick Campbell.

Burke’s research on recommender systems aims to enhance the fairness of algorithms by removing the biases that systems may inherit, whether from engineers’ design decisions or from the data used to train them. Right now, he’s interested in how to give users more control over what content or products the algorithm serves up.

You might expect him to be an A.I. evangelist, but Burke is more measured about the likely impact these tools will have.

“The hype is absurd,” he said. “I want students to focus on the proven capabilities of these technologies, as opposed to the claims people make about them.”

In fact, the critical perspective information science faculty bring to A.I. is one of the reasons students appreciate the degree. Dow, a self-described theater kid and art lover, said she came to college “as an A.I. hater, almost”; when an assignment required her to use ChatGPT, she was the only person in her class who hadn’t used it before.

“A.I. honestly scares me a little bit, when you think about it as this huge behemoth,” Dow said. “But the way we learn about it—here’s this tool, here’s what it can do, what do we think is wrong with it, what does it do poorly—helps make it a smaller problem. I’m seeing the advantages that come from using it responsibly and ethically.”

The ethical challenges A.I. poses are an important dimension for faculty, as well. That’s especially true at a college like CMDI, which prepares professionals for success in journalism, advertising, design and other creative fields. Because large language models have been trained on reams of copyrighted creative work, there is understandable hesitancy to adopt these tools.


Casey Fiesler

It’s why Casey Fiesler, the William R. Payden endowed professor in information science, leaves room for the “conscientious objectors” in her teaching; her public scholarship—which includes TikTok videos and standup comedy, as well as traditional thought leadership—is deeply concerned with the ethical dimensions of these tools.

She’s piloting a course this spring, A.I. and Society, that challenges students to examine broader societal implications around jobs, creativity, education and environmental impact as they relate to A.I.

“I don’t want students to not take this class because they have an ethical objection to using A.I.,” Fiesler said. “I wanted to create space for students who are really excited about A.I., and should think critically about it, and for those who need to learn how it works even if they’re critical of it.”

A counter to moving fast, breaking things

Chris Carruth approaches such challenges and perspectives through his artwork, which he describes as “a slow, contemplative resistance” in which he uses technology to “interrupt, interrogate and agitate conventional, normalized systems.”

That work, he said, is intended to run counter to the tech industry’s mantra of moving fast and breaking things.

“I get where Mark Zuckerberg was coming from when he said that, but that attitude incurs an ethical debt, which is what we’re trying to avoid,” said Carruth, an assistant teaching professor.

Rather than lecture at his students, Carruth challenges them to learn about topics like automation, policing and surveillance, and digital labor, and bring researched ideas to class for open discussion and debate. In doing so, he hopes to cultivate a sense of empathy among his charges.

“Ethics in computer science and computer science education should not be a feature. It should be the foundation.”

Chris Carruth, assistant teaching professor, information science

“I’m not saying we need to hit some big red stop button—and you’d probably get fired if you’re at work and pushing not to use A.I. at all,” he said. “To understand how this might actually work in your career, you need to bring a voice not of dissent, but of empathy, of nuance. So, be able to say, let’s not stop, but let’s pause, let’s think about impact before we roll these things out.

“Ethics in computer science and computer science education should not be a feature,” he said. “It should be the foundation.”

For Bryan Semaan, associate professor and chair of the information science department, the need for ethics in this space is expressed through the critical perspectives he studies in his research, which focuses on the interplay of race, media and technology. The Center for Race, Media and Technology that he manages has welcomed speakers like Ruha Benjamin, of Princeton University, and Timnit Gebru, formerly of Google, to encourage more critical thinking around the development of large language models and A.I.

Bringing their own identity, thinking


Bryan Semaan

In his class on race and technology, Semaan asks his students to write an essay reflecting on the benefits and harms of particular technologies. But before they start writing, they feed that prompt into ChatGPT.

“It’s a chance to think critically about what the A.I. returns to them,” he said. “What it’s written tends to not reflect the experiences my students have had. So, it becomes a way for them to see that it’s just giving them something, but they need to make sure their identity and thinking are infused in it.”

Something that makes information science at CMDI unique, he said, is that instead of rolling out countless new courses—which could quickly become dated by the speed of change in A.I.—the department has sought to integrate these tools into each course it offers.

“You won’t see A.I. in every course name, but we bring A.I. to every conversation we’re having, whether that’s data visualization, user-centered design or machine learning,” Semaan said.

As the technology becomes more integrated into students’ lives, those conversations are going deeper and deeper into their coursework.

“When I taught information ethics and policy at the graduate level, we started with a week on A.I. Then it was two weeks on A.I.,” Fiesler said. “Now, there’s no weeks on A.I., because it’s everywhere in that class and every other one. In almost everything we teach, A.I. is relevant to the topic.”


Joe Arney covers research and general news for the college.