# ATLAS researchers converge at TEI’26 to showcase their work on tangible, embedded, and embodied interaction

Michael Kwolek | Monday, March 9, 2026

Members of several ATLAS labs show off the latest research on human-computer interactivity.

Sound, vision, movement and touch—ATLAS researchers explore many different ways humans can interact with computers, collect and analyze data, and empower creative exploration.

Nearly a dozen current and former ATLAS lab members will participate in [ACM TEI’26](https://tei.acm.org/2026/) in Chicago (March 8-11, 2026), the 20th annual conference presenting the latest results in tangible, embedded, and embodied interaction.

This year’s conference theme is “Tide + Tied.” Organizers note, “By becoming a venue to bring multi-folded ‘Tides’ across diverse, interdisciplinary fields, the conference aims to bring researchers, designers, and artists with different backgrounds and interests together to be ‘Tied,’ weaving the future of the TEI community together.”

ATLAS has been involved with the TEI conference since its early years, with professor and ACME Lab director Ellen Do and ATLAS director Mark Gross both actively involved behind the scenes.

Do, who is a co-author on three papers and three works-in-progress accepted at TEI’26, explains, “Each one of the projects is a documentation of how researchers think about ideas and how to implement them and get them to fruition.”

She elaborates, “The conference is called Tangible, Embedded and Embodied Interaction, so a lot of work we’re doing is beyond the screen. Things that we touch and put together.”

## Papers

### [Sound of Kigumi: A Playful VR Joinery Adjustment with Hammering Sound Feedback](https://programs.sigchi.org/tei/2026/program/content/223849)

Kosei Ueda, **Ellen Yi-Luen Do**, Hironori Yoshida

**Abstract**: Traditional carpentry faces a critical shortage of skilled workers due to limited opportunities for potential apprentices to access onsite woodworking experience. Through expert interviews, we learned the importance of hammering sound to judge the precision in Kigumi assembly, as master carpenters rely on differences between “soft sound” and “sharp sound” without relying on visuals. This paper presents Sound of Kigumi (SoK), a playful VR system for inexperienced users to casually experience sound sensory skills through the loop of hammering and chiseling. In SoK, users listen to hammering sound in relation to tightness, assess the precision of their work, and return to chiseling for further adjustments. Furthermore, SoK implements pseudo-haptic feedback by visually modifying hammering resistance based on chiseling progress. Expert evaluation indicated SoK replicates the hammering process and serves as an effective introductory tool, and user feedback confirmed SoK provides an immersive woodworking experience and effective Kigumi learning.

*The first prototype: The user observes and hammers two types of Kigumi - one correctly processed without visible gaps when hammered and one incorrectly processed that reveals gaps upon hammering.*

### [Why (Not) ReacTIVision: Emerging Challenges and Opportunities for Building Tangible User Interfaces with Computer Vision Toolkits](https://programs.sigchi.org/tei/2026/program/content/223840)

**Krithik Ranjan**, S. Sandra Bae, Peter Gyory, **Ellen Yi-Luen Do**, Clement Zheng, Rong-Hao Liang

**Abstract**: Outdated Computer Vision (CV) toolkits for Tangible User Interfaces (TUI) have led to fragmented practices, diminished reproducibility, and reduced community support. This paper examines the past, present, and future trajectory of CV-TUI toolkits. First, our scoping review of ACM literature reveals a divergence between applications using the limited interactions of established toolkits like ReacTIVision and the fragmented, bespoke systems built for complex interactions, highlighting the need for advanced toolkits that enable accessible making. Second, we present proof-of-concept applications using the contemporary ArUco fiducial marker library. We demonstrate how accessible hardware, like a top-down camera and a flat-panel display, can support a comprehensive design space of tangible interactions beyond 2D manipulation, including 3D spatial interaction, multi-device interaction, and actuated tangibles within canonical applications. Finally, reflecting on our findings, we offer six suggestions for building next-generation CV-TUI toolkits. This study provides the TUI community with an updated perspective to inform future research.

*Sensing touch input on tokens with a capacitive touchscreen: a) Each ArUco-marked knob is augmented with a vinyl-cut copper sheet pattern. b-c) The knob transfers finger-touch inputs to the touchscreen when users interact with the token.*
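The accessible-hardware recipe the abstract describes (a standard camera watching ArUco-tagged tokens) is easy to prototype with off-the-shelf tools. As a rough sketch of that general technique, not the authors' actual code, marker detection with OpenCV's ArUco module looks like this (assuming OpenCV 4.7+ for the `ArucoDetector` API; camera index and dictionary choice are also assumptions):

```python
# Minimal top-down ArUco tracking with OpenCV (>= 4.7 assumed for the
# ArucoDetector API). A sketch of the technique, not the paper's code.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # assumed: webcam mounted above the display
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        # Outline each detected token and label it with its marker ID.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("tangible tokens", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```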
### [HyperDance: Real-Time Vibrotactile Stimulation Feedback of Inter-Brain Connectivity in Partner Dance](https://programs.sigchi.org/tei/2026/program/content/223837)

**Thiago Rossi Roque**, Ruojia Sun, **Ellen Yi-Luen Do**, **Grace Leslie**

**Abstract**: Building on the growing interest in technology-supported dance practice, neural imaging offers novel opportunities to reveal dancers’ internal states and expand the possibilities for augmented, embodied interaction. Despite advances in social neuroscience, the exploration of dance through brain imaging remains limited by technical challenges. To overcome these barriers, we developed and validated a real-time vibrotactile biofeedback system based on inter-brain coupling (IBC) measures from tango dancers using a mobile, synchronous multi-brain EEG system. We first conducted an empirical study recording synchronized EEG and motion data to test whether behavioral synchronization enhances inter-brain coupling. Insights from this study informed the design of our tangible neurofeedback system, which experienced dancers evaluated. Our findings support the Synchronicity Hypothesis of Dance and demonstrate how embodied technologies can enhance collective dance practice. This work introduces a novel methodological and interaction paradigm, bridging neural measurement with wearable feedback for socially situated embodied experiences.

*HyperDance enables real-time measurement and tactile feedback of inter-brain coupling during natural partner dance practice.*
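The abstract does not specify which inter-brain coupling metric drives the vibrotactile feedback. One widely used coupling measure in multi-brain (hyperscanning) EEG work is the phase-locking value (PLV), sketched below with synthetic signals standing in for two dancers' recordings; this is an illustration of the general idea, not the paper's method:

```python
# Phase-locking value (PLV) between two band-passed EEG signals. Purely
# illustrative: HyperDance's actual IBC measure is not given in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(8.0, 12.0)):
    """Phase-locking value of two equal-length signals in `band` (Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    px = np.angle(hilbert(filtfilt(b, a, x)))
    py = np.angle(hilbert(filtfilt(b, a, y)))
    # Mean resultant length of the phase difference: 1 = locked, 0 = random.
    return float(np.abs(np.mean(np.exp(1j * (px - py)))))

fs = 250  # Hz; a typical mobile-EEG sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
a_sig = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
b_sig = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
print(f"alpha-band PLV: {plv(a_sig, b_sig, fs):.2f}")  # close to 1 here
```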
---

## Art and Performance

### [Bioactuated Tapestry: Converging Textile Craft and Moisture-Responsive Biomaterials](https://programs.sigchi.org/tei/2026/program/content/224275)

**Eldy S. Lazaro Vasquez**, **Viola Arduini**, **Etta W Sandry**, **Katerina Houser**, **Srujana Golla**, **Mirela Alistar**

**Abstract**: Bioactuated Tapestry is an installation that explores how biomaterials and textile craft unfold multiple temporalities of interaction. Structured in three zones, the installation moves from milk-based bioplastic samples that change shape quickly when misted, to a Sample Book that documents iterations of bioplastic integration into weaving, to a woven tapestry that changes shape slowly in response to humidity in the surrounding space. Together, these zones demonstrate how interaction can emerge from material behavior shaped through biomaterial formulation and, when woven, through structure. The work foregrounds biomaterial agency, weaving, and situated sustainability grounded in sourcing, fabrication, and practices of care. Through this convergence of biodesign and textile craft, Bioactuated Tapestry aligns with the TEI theme of Resurgence and Convergence, highlighting how material-led practices reconnect material experimentation, environmental attunement, and embodied ways of knowing.

[Learn More](https://www.eldylazaro.com/?portfolio=bioactuated-tapestry)

*Detail of Bioactuated Tapestry, showing colored casein-based bioplastic strips woven through black cotton yarns. Moisture causes the bioplastic to change shape, and the weave directs that change into curling.*

---

## Pictorials

### [Designing for the Leaky Body: Exploring Biomaterial Absorption as Body-Material Interaction](https://programs.sigchi.org/tei/2026/program/content/224206)

**Viola Arduini**, **Eldy S. Lazaro Vasquez**, **Srujana Golla**, **Mirela Alistar**

**Abstract**: Leaking bodies are often concealed or disregarded in both society and design. Likewise, bodily fluids are rarely leveraged as triggers for material interaction in HCI. In this pictorial, we investigate how fluid-responsive biomaterials can enable porous, expressive, and cyclical interactions co-shaped by the body. We focus on a milk-derived bioplastic with reversible shape-changing properties, examining fluid absorption as a meaningful design affordance. Our material-led approach contributes both formulation and fabrication methods of casein bioplastic, while autoethnographic inquiry with a lactating body informed the development of Leaky Body Maps and speculative garments that position leakage as a generative site of body-material interaction. This work contributes to the discourse of feminist and posthuman HCI by centering bodily permeability, material responsiveness, and the potential of designing with—rather than concealing—leaky bodies.

*Garment prototype showing where casein-based bioplastic was placed based on a body leak map of possible milk leakage.*

---

## Works In Progress

### [ArUcoTUI: Software Toolkit for Prototyping Tangible Interactions on Portable Flat-Panel Displays with OpenCV](https://programs.sigchi.org/tei/2026/program/content/224242)

Rong-Hao Liang, Steven Houben, **Krithik Ranjan**, S. Sandra Bae, Peter Gyory, **Ellen Yi-Luen Do**, Clement Zheng

**Abstract**: Tangible User Interfaces (TUIs) that integrate digital information with physical interaction require specialized hardware and complex calibration, limiting their adoption in portable or mobile display systems. This paper introduces ArUcoTUI, a computer vision (CV) toolkit for prototyping tangible interactions on portable screens, leveraging standard cameras and the OpenCV library. ArUcoTUI uses ArUco fiducial markers to detect physical inputs. The software toolkit offers streamlined calibration, a signal processing pipeline, and a client application that translates tangible input into structured events for use in HCI applications. Using a conventional camera in a top-down setting with a flat-panel display, we demonstrate how this toolkit supports the development of interactive surface TUIs with advanced features, including 3D spatial interaction, multi-device interaction, and actuated tangibles within applications. We describe the software implementation, which utilizes accessible hardware to support the development of these tangible interactions. We provide the results of a preliminary evaluation with users, including design implications and suggestions for future research and development.

*ArUcoTUI is a software toolkit for rapid TUI prototyping on portable screens. It uses standard cameras, OpenCV, and ArUco markers for real-time object tracking. We demonstrate the applicability using an overhead camera for a) multi-token music control, b) above-screen gesture detection, c) multi-display board games, and d) actuated data visualization using robots.*
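The "translates tangible input into structured events" step the abstract mentions is the glue between raw marker detections and an application. A hypothetical sketch of such a translation layer follows; the event vocabulary, class name, and threshold are illustrative assumptions, not ArUcoTUI's real API, and `corners`/`ids` are the outputs of an OpenCV ArUco detector as in the earlier sketch:

```python
# Sketch of turning per-frame marker detections into enter/move/leave
# events. Names and thresholds are assumptions, not ArUcoTUI's API.
import numpy as np

class TokenEventTracker:
    def __init__(self, move_threshold_px=2.0):
        self.positions = {}  # marker id -> last known centroid (x, y)
        self.move_threshold = move_threshold_px

    def update(self, corners, ids):
        events, seen = [], set()
        if ids is not None:
            for quad, marker_id in zip(corners, ids.flatten()):
                mid = int(marker_id)
                centroid = quad.reshape(4, 2).mean(axis=0)
                seen.add(mid)
                prev = self.positions.get(mid)
                if prev is None:
                    events.append(("enter", mid, centroid))
                elif np.linalg.norm(centroid - prev) > self.move_threshold:
                    events.append(("move", mid, centroid))
                self.positions[mid] = centroid
        for gone in set(self.positions) - seen:  # tokens lifted away
            events.append(("leave", gone, self.positions.pop(gone)))
        return events

# Per frame: events = tracker.update(corners, ids), then dispatch to the app.
```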
### [Rig-a-Doodle: Tangible Kit for Dynamic Hand-drawn Character Animation](https://programs.sigchi.org/tei/2026/program/content/224189)

**Krithik Ranjan**, Khushbu Kshirsagar, Harrison Jesse Smith, **Ellen Yi-Luen Do**

**Abstract**: Character animation remains challenging for novices and children despite advances in digital tools. While recent tangible interfaces have lowered barriers by enabling creators to animate their drawings on paper, they are limited to preset animation sequences and support only human-like characters. We present Rig-a-Doodle, a tangible kit and web application for fully open-ended character rigging and animation, where creators can draw any character and construct a custom physical rig using everyday materials to animate it. This work-in-progress contributes a system of tangible interaction to animate hand-drawn characters by direct physical manipulation of custom rigs in real time. We share findings from a preliminary workshop with adults to explore the kinds of expressive animation the kit enables, discover issues with interaction, and source ideas for future directions.

*(Top-left) Rig-a-Doodle character template to draw the character and cut out CV markers for the rig. (Top-right, bottom-left, bottom-right) The three steps of Capture, Assign, and Play illustrated with screenshots from the Rig-a-Doodle application.*
### [Lighting the Reef: Modular Paper Circuits as Ecological Metaphor](https://programs.sigchi.org/tei/2026/program/content/224167)

**Ruhan Yang**, **Yuchen Zhang**, **Ellen Yi-Luen Do**

**Abstract**: We present Lighting the Reef, an interactive installation that uses modular 3D paper circuits to explore ecological fragility. Participants build coral structures from foldable paper blocks with copper tape and low-voltage components. When connections align, coral modules glow, metaphorically expressing the energy exchange between coral and zooxanthellae, the symbiotic algae crucial to coral metabolism. Pollution modules add resistance that dims light or interrupts the current entirely, mirroring environmental disruption. We position Lighting the Reef as a Research through Design case that articulates fragility as an interaction aesthetic and ecological metaphor. We reflect on how modular circuitry, material constraints, and embodied play make precarity tangible. We also report workshops with 15 participants that discussed themes of care, collapse, and interdependence. We contribute insights into designing for fragility with modular circuits, ecological storytelling through tangible interaction, and accessible and reproducible designs for participatory sustainability education.

*Lighting the Reef is a tangible installation built from 3D paper circuit modules, whose illumination depends on alignment and balance. As participants assemble and adjust the blocks, the lights brighten, dim, or turn off, reflecting the changing conditions of the coral system.*
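The dimming metaphor is plain series-circuit behavior: added resistance cuts the current through an LED. A back-of-envelope calculation with illustrative component values (a hypothetical 3 V module, not figures from the paper) shows how quickly a "pollution" resistor dims the light:

```python
# Series-circuit arithmetic for the dimming metaphor.
# Component values are illustrative assumptions, not from the paper.
V_SUPPLY = 3.0   # volts, e.g. a coin cell
V_LED = 2.0      # typical LED forward-voltage drop
R_BASE = 100.0   # ohms of current limiting in a healthy coral module

for r_pollution in (0.0, 400.0, 2000.0):
    current_ma = (V_SUPPLY - V_LED) / (R_BASE + r_pollution) * 1000.0
    print(f"+{r_pollution:6.0f} ohm of 'pollution' -> {current_ma:5.2f} mA")
# 10 mA glows brightly, ~2 mA is visibly dim, ~0.5 mA is barely lit.
```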
---

# Using AI ethically: 6 tips for bringing AI tools into learning and work

Michael Kwolek | Tuesday, March 3, 2026

AI tools are everywhere. We offer several ethical considerations for how and when to use them.

How do you AI? Environmental, privacy, political and intellectual property issues aside (and those are major issues), there are many ethical considerations involved in how we approach our own day-to-day use of AI tools.

We interviewed [Nikolaus Klassen](/atlas/nikolaus-klassen), business analyst at Google and ATLAS lecturer, on the topic of AI ethics. You can read highlights from the conversation in our article, [Exploring the ethics of AI: Can we use tools like ChatGPT consciously?](/atlas/exploring-ethics-ai-can-we-use-chatgpt-and-other-tools-consciously)

During our interview, Klassen also proposed tips we can use when considering how to incorporate AI tools into our work.
Take a look:

### Consider hidden taxonomies

All the processed information we use in AI tools is organized through taxonomies—systems for naming, labeling and cataloging data points. The question we should ask ourselves when we get an AI output is: What are the hidden taxonomies and assumptions this is built on?

For instance, [a researcher found that](https://medium.com/@todasco/what-ai-thinks-ceos-look-like-bd8831013370) AI tools would respond to the prompt “picture of a CEO” with outputs of a (usually white) man in a suit. Nobody explicitly trained the models to do that, nor was it requested by the user—this is an example of a hidden taxonomy.

That may be an obvious example of bias baked into the system over time, but there are as-yet undiscovered taxonomies in the data AI tools draw on.

### Note the law of the instrument

Abraham Maslow said, “If the only tool you have is a hammer, you tend to see every problem as a nail.” Ask yourself: Am I distorting reality to fit my tool?

The example Klassen uses in class is predictive policing, which distorts reality by applying an algorithm fed on historical data that may be incomplete, biased and a poor analog for the present reality. This is a use case that would require us to distort reality to make the tool work where it does not fit the problem.

### Do a reality check

Consider what would happen if you were to act on the advice an AI tool outputs. Does the recommendation actually fit the situation? Consider why the AI tool is offering a particular choice in a specific way. This will become increasingly important as AI companies incorporate advertising into their platforms.

Is the tool framing the choice appropriately, or could there be a more ethically sound way to frame it?

### Hone your judgment skills

Before AI tools became ubiquitous, students and junior workers typically turned what they learned into artifacts—they would write a software function, develop a mathematical proof, draft an essay or sketch out a design. Such artifacts were the output of the mental work they did.

Now that AI can easily create artifacts, such outputs can no longer be considered the endpoint of mental work.
When artifacts are cheap, judgment becomes more valuable.

If we do not have to build research reports, analyses, recommendations or even creative designs ourselves—as junior workers often did in many fields—we risk losing an entire infrastructure designed to train the next generation of leaders to have refined judgment and discernment skills.

We must be diligent in learning to judge artifacts made by AI and determining how to iterate on and improve them.

**Key ethics concepts**

- **Choice architecture** - The deliberate design of a tool or environment that influences how people make decisions without directly restricting choice.
- **Deontology** - The theory that there are absolute moral obligations that must be followed regardless of consequences, exceptions or potential benefits.
- **Law of the instrument** - A cognitive bias toward over-reliance on a familiar tool for solving problems, regardless of suitability.
- **Moral licensing** - A phenomenon in which people justify an immoral action after having previously done something good.
- **Utilitarianism** - The theory that the most moral action is the one that maximizes good and minimizes suffering for the greatest number of people.

### Watch for “workslop”

Harvard Business Review recently published an article contending that [AI-generated “workslop” is destroying productivity](https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity). Consider overly wordy reports with hidden errors, customer service chatbots that lead users to dead ends, drab creative copy, and business recommendations that presenters cannot defend. AI can make people slower when they have to wade through slop—and they often return the favor with AI-generated responses of their own.

AI tools can feel like fast food, giving us something quick and easy that may not be very nutritious. Yet we are given no “nutrition facts”—we do not know where the output is incorrect, shows bias or fails to give the full story.

Like diet and exercise, it takes conscious effort to maintain a healthy relationship with AI tools. If you want to stay mentally engaged, you have to do the equivalent of going to the gym and working out.

### Beware the dopamine peak

When making something yourself from scratch, your work builds up to the moment of completion—this creates a dopamine peak, a temporary surge in the “feel-good” hormone.

But when AI can bring you immediately to that peak with little effort, the plunge afterward is just as quick.
You may lose motivation and not fully internalize what you just completed.

We would do well to learn to use AI tools as a means of continued development toward mastery of a craft, rather than simply as time savers.

---

# Inaugural Sustainability Research Initiative Research Fellows unveiled

Michael Kwolek | Monday, February 23, 2026

Assistant professor Mirela Alistar is a member of the first cohort of SRI Research Fellows selected by CU Boulder’s Research & Innovation Office.
Read more: /researchinnovation/2026/02/23/inaugural-sustainability-research-initiative-research-fellows-unveiled

---

# 2026 RIO Faculty Fellows cohort spans departments and disciplines across campus

Michael Kwolek | Monday, December 8, 2025

Assistant professor Michael Rivera is one of 18 joining the 2026 RIO Faculty Fellows cohort.

Read more: /researchinnovation/2025/12/08/2026-rio-faculty-fellows-cohort-spans-departments-and-disciplines-across-campus

---

# Balfour’s Memory Care patients treated to soothing Longmont Symphony experience

Michael Kwolek | Friday, December 5, 2025
Grace Leslie noted that “for people living with Alzheimer’s or dementia, both anecdotal and experimental evidence point to the durability of music in the brain.”

Read more: https://www.dailycamera.com/2025/12/04/longmont-symphony-balfour-partnership/

---

# DNA origami: unfolding genetic breakthroughs

Michael Kwolek | Tuesday, November 18, 2025

Living Matter Lab designs nanorobots for DNA production to speed biomedical research.

**Lab Venture Challenge:** Johnson and Alistar competed as finalists in CU Boulder’s 2025 [Lab Venture Challenge](/venturepartners/opportunities-and-events/lab-venture-challenge#finalists), where their technology generated much interest from industry leaders.
Access to DNA is crucial in many branches of biomedical research. But making long strands of DNA is time consuming, error-prone and expensive.

Over the years, researchers have worked to make DNA synthesis more efficient, with [important contributions made by Marvin Caruthers](/asmagazine/2023/06/28/cu-boulders-marvin-caruthers-wins-inaugural-merkin-prize-biomedical-technology-developing), distinguished professor of chemistry and biochemistry at the University of Colorado Boulder. This research has advanced a range of biomedical fields including drug and vaccine development, pathogen tests and cancer diagnostics.

Making DNA involves complex biochemical and mechanical processes that assemble a strand base by base. Each step has a small chance of failure, and over hundreds or thousands of bases those small chances compound.
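To see why that compounding matters, assume (illustratively) a fixed per-base success rate p: the expected fraction of full-length strands after n bases is roughly p raised to the n, which collapses fast for long strands. The rates below are typical published coupling efficiencies for chemical DNA synthesis, used purely for illustration and not figures from the ATLAS project:

```python
# Full-length yield falls roughly as p**n for per-base success rate p
# over n bases. Illustrative numbers, not from the Living Matter Lab.
for p in (0.99, 0.995, 0.999):
    for n in (100, 1000):
        print(f"p = {p:5.3f}, n = {n:4d}: yield ~ {p**n:8.4%}")
# Even at 99.5% per base, a 1,000-base strand is only ~0.7% full length.
```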
The process of creating a DNA strand longer than 1,000 bases often takes several weeks, which can hinder research cycles. To solve this, biotech companies have pursued incremental efficiency gains in strand construction.

Now researchers in the ATLAS Institute’s [Living Matter Lab](/atlas/living-matter-lab) aim to rethink DNA synthesis altogether.

**A new way to build DNA**

Lab director and assistant professor [Mirela Alistar](/atlas/mirela-alistar) and postdoctoral researcher [Joshua Johnson](/atlas/joshua-johnson) are working to develop nanorobots that will more quickly and accurately build DNA to researchers’ specifications in a matter of days instead of weeks.

They are employing DNA origami—a creative technique for shaping these building blocks of life—to create a nanorobot that speeds the process of making new DNA strands. “It folds much like paper origami, but it is made of DNA,” Johnson noted. “Our particular nanorobot is rectangular with a rotating arm element. It is about 2,000 times smaller than the width of a human hair.”

DNA origami research dates back to 2006, with scientists making simple but precise nanoscale shapes and patterns. Alistar and Johnson aim to apply this technique to the mechanical arrangement of molecules. “We are taking existing scientific concepts and combining them in new ways—much like engineering a normal-sized robot, but at the molecular scale,” Johnson elaborated.

Alistar described the team’s contribution to DNA origami research as “designing the DNA structure that becomes a robot such that it is more stable, translating the fabrication process from extremely highly advanced labs to a little bit of a lower-key lab in computer science, which means we have to be inventive with a lot of the processes.”

**The right place for the research**

The ATLAS Institute’s relationships with the College of Engineering and Applied Science create space for such breakthrough research. “We are interdisciplinary—I’m confident saying that,” Alistar said. “We do work with DNA for bacteriophages. We also work with microfluidics, which is also needed for the DNA nanorobot. So there are a lot of intersections in which we saw the potential for developing a DNA origami-based project in the lab.”

Sensing great promise in their research, the team is seeking a commercialization path to reach the real world. “This nanomachine process that we developed could be substantially faster than anything else in the industry,” Johnson noted. “There is a clear market need: biotech and pharmaceutical companies wait weeks for their large DNA strands, and that slows down research.”

According to early market analysis, these companies would be willing to pay more to get their DNA faster.
“We’ve identified that the gene synthesis market would benefit most because they need the longest DNA, they need it the fastest and they’re willing to pay the most for it,” Johnson said.

Alistar also noted potential in cell-free research. “A lot of development in biology goes toward using merely DNA, not living cells. Applications are mostly in vaccines.” If, for example, you could more quickly make a vaccine even in remote places, that could have major implications for global health.

To commercialize this research, Alistar and Johnson are pursuing “a lot of support from CU Boulder and the state of Colorado in getting to an actual product,” Alistar explained. “If everything goes right, we’re gonna be enrolled in a national-level program for two months of customer discovery research.”

The team hopes to demonstrate the market feasibility of their new synthesis method within three years, easing one of the main bottlenecks in biotech research and helping smooth the way toward improved vaccines, gene therapy and more personalized medicine.

---

# Minds in rhythm

Michael Kwolek | Tuesday, November 11, 2025

ATLAS PhD student studies how brain activity syncs when musicians perform together.
Imagine the cacophony of a conversation in which everyone talks, listens and responds at the same time.

Trained musicians performing together can make a similar set of sensory inputs and brain activity truly resonate. Though a feature of the human experience for thousands of years, interbrain synchronization during musical performance is not well understood.

As a member of the [Brain Music Lab](/atlas/brain-music-lab), ATLAS PhD student [Thiago Roque](/atlas/thiago-roque) has developed novel techniques for studying these nuanced dynamics, aiming to expand our understanding not only of musical performance but also of human-to-human collaboration and connection more broadly.

In his teens, Roque fell in love with music while beginning to develop his engineering skills.
“I always wanted to be an engineer because I wanted to understand how things work, mostly toys and mechanics, electrical stuff,” he said, “but at that point, I also wanted to understand music.”

When he got his first electronic keyboard, he realized, “An electrical engineer designed this to make music, so I realized that I could connect both things.”

After earning BS and MS degrees in electrical engineering at the University of Campinas in Brazil, Roque came to study with [Grace Leslie](/atlas/grace-leslie) at Georgia Tech, then transferred to CU Boulder when Leslie opened her Brain Music Lab in the ATLAS Institute.

“Thiago has been a really integral part of the Brain Music Lab,” Leslie noted. “A lot of that has to do with his engineering background—it’s rare to find graduate students who have the musical sophistication to be working on these projects and can rise to the occasion when it comes to developing custom technology for the research questions that we have.”

**Studying brains in motion**

Analyzing brain activity in moving bodies is surprisingly challenging—standard EEG data is captured from subjects who remain still.

Roque has studied how dancers’ brains sync when they perform together, using his electrical engineering background to develop ways to improve the quality of EEG data from moving subjects.

To compensate for all the action involved, he sewed motion sensors into the EEG caps and modified hardware to read neck and eye movement, improving data quality. This led to more ambitious plans with an even higher degree of difficulty.
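The article does not describe Roque's cleaning algorithm, but one standard way to exploit cap-mounted motion sensors is to treat the accelerometer channels as noise references and regress them out of each EEG channel. A simplified sketch of that general approach (with synthetic data, not the lab's actual pipeline):

```python
# Regress accelerometer "noise reference" channels out of EEG channels.
# A sketch of a common motion-artifact technique, not the lab's code.
import numpy as np

def regress_out_motion(eeg, accel):
    """eeg: (samples, channels); accel: (samples, axes), same clock."""
    ref = np.column_stack([accel, np.ones(len(accel))])  # add an intercept
    beta, *_ = np.linalg.lstsq(ref, eeg, rcond=None)     # least-squares fit
    return eeg - ref @ beta                              # motion removed

rng = np.random.default_rng(1)
motion = rng.standard_normal((5000, 3))                  # 3-axis accelerometer
neural = rng.standard_normal((5000, 8))                  # "true" brain signal
contaminated = neural + motion @ rng.standard_normal((3, 8))
print(contaminated.std(), regress_out_motion(contaminated, motion).std())
# Variance drops back toward the clean signal once motion is removed.
```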
**The string ensemble experiment**

Having dreamed for years of analyzing a string quartet performing a piece of music, Roque explained, “We needed all the equipment to be precisely synchronized, so we had to design this hardware that sends triggers and synchronizes everything. I designed and assembled the printed circuit boards myself.”

He spent months integrating off-the-shelf EEG equipment, accelerometers and other sensors with custom-designed components to normalize the data and sync it between all the musicians.
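The article does not spell out the synchronization scheme, but the standard pattern with shared hardware triggers is to timestamp each device's samples against the common pulse train and resample everything onto one clock. A hedged numpy sketch of that idea (function name and arguments are illustrative; the custom trigger hardware itself is not modeled):

```python
# Align one amplifier's recording to a shared trigger timeline.
# Illustrative sketch of trigger-based synchronization, not Roque's firmware.
import numpy as np

def align_to_common_clock(signal, trig_samples, trig_times, fs_out):
    """Map one device's samples onto the shared trigger timeline.

    trig_samples: sample indices where this device recorded each pulse.
    trig_times:   true pulse times in seconds from the shared trigger source.
    """
    # Keep only the span bracketed by triggers, where timing is defined.
    lo, hi = int(trig_samples[0]), int(trig_samples[-1]) + 1
    idx = np.arange(lo, hi)
    # Timestamp every sample by interpolating between trigger pulses.
    sample_times = np.interp(idx, trig_samples, trig_times)
    # Resample onto a uniform grid shared by all devices.
    common_t = np.arange(trig_times[0], trig_times[-1], 1.0 / fs_out)
    return common_t, np.interp(common_t, sample_times, signal[lo:hi])
```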
I'm getting a PhD in creative technology and design, neuroscience and cognitive science.”&nbsp;</span></p><p dir="ltr"><span>Leslie explained how this research fits into the Brain Music Lab’s larger mission: “While we are focusing on technology and developing new technology and studying how humans interface with it, what sets us apart is our focus on the really human element to it.”</span></p><p dir="ltr"><span><strong>The next movement</strong></span></p><p dir="ltr"><span>Roque aims to continue studying this young quartet to determine whether their brain activity syncs more thoroughly as they continue to perform together. He would also like to study graduate musicians and seasoned professionals to learn how interbrain coupling may change based on the experience level of the musicians.</span></p><div class="ucb-box ucb-box-title-left ucb-box-alignment-right ucb-box-style-fill ucb-box-theme-lightgray"><div class="ucb-box-inner"><div class="ucb-box-title"><span>Expanding the scope</span></div><div class="ucb-box-content"><p><span>Brain Music Lab director Grace Leslie recently performed a solo improvisational piece,&nbsp;</span><a href="/atlas/inside-tank" rel="nofollow"><span>Inside the Tank</span></a><span>, in the B2 Black Box Theater, integrating an EEG headset and body sensors.</span></p><p><span>The lab team also outfitted several audience members with EEG monitors, giving Roque additional data to study the physiological responses of those experiencing live music.</span></p></div></div></div><p dir="ltr"><span>Roque also looks forward to bringing this technology to the stage. Plans are in the works for a string quartet performance in the spring semester, with a huge visualization of live physiological data to give the audience a sense of the musicians’ synchronization.</span></p><p dir="ltr"><span>“A lot of it is developing this technology that we hopefully can use in the future to continue to study musical group dynamics,” Leslie said, “but there's also this human-computer interaction application where he's done some of the foundational research to show that we can develop brain-computer interfaces that can be social.”</span></p><p dir="ltr"><span>This research may reveal insights into how human connection and collaboration work.</span></p>
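<p dir="ltr"><span><em>How is “sync” quantified? One measure commonly used in two-brain (hyperscanning) studies is the phase-locking value, which captures how consistently two signals hold a stable phase relationship. The sketch below is illustrative only (it is not the Brain Music Lab’s pipeline) and assumes two band-pass-filtered EEG traces of equal length, one from each musician.</em></span></p>
<pre>
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two 1-D signals.

    x, y: band-pass-filtered EEG traces of equal length, e.g. the same
    electrode recorded from two different musicians. Returns a value in
    [0, 1], where 1 means the phase difference never varies.
    """
    # instantaneous phase from the analytic (Hilbert) signal
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    # mean resultant length of the phase differences
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))
</pre>
<p dir="ltr"><span><em>Repeated across electrode pairs, frequency bands and rehearsal sessions, a measure like this is one plausible way to test whether an ensemble’s coupling grows over a semester.</em></span></p>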
<p dir="ltr"><span>Over time, it could lead to tools and techniques to improve our ability to sync with each other when working on complex tasks—whether that means performing in a string quartet, playing a team sport or simply holding a nuanced conversation.</span></p> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-11/Brain%20Music%20String%20Quartet%204.jpg?itok=5bpEe5cB" width="1500" height="844" alt="string quartet with EEG monitors and researchers around them"> </div> </div> </div> </div> </div> </div> <div>ATLAS PhD student studies how brain activity syncs when musicians perform together.</div> Tue, 11 Nov 2025 16:33:16 +0000 Michael Kwolek 5152 at /atlas CU Boulder graduate student named a Google PhD fellow /atlas/2025/10/27/cu-boulder-graduate-student-named-google-phd-fellow <span>CU Boulder graduate student named a Google PhD fellow</span> <span><span>Michael Kwolek</span></span> <span><time datetime="2025-10-27T10:02:04-06:00" title="Monday, October 27, 2025 - 10:02">Mon, 10/27/2025 - 10:02</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/people/hye-young_jo.jpg?h=b48a80bf&amp;itok=EJYE0kLu" width="1200" height="800" alt="Hye-Young Jo"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/771" hreflang="en">phd</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/1531" hreflang="en">programmable</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> </div> </div> </div> </div> <div>Hye-Young Jo of computer science and the ATLAS Institute will be using the funding to research human-computer interactions.</div> Mon, 27 Oct 2025 16:02:04 +0000 Michael Kwolek 5150 at /atlas Minority Report-inspired interface allows us to explore time in new ways /atlas/minority-report-inspired-interface-allows-us-explore-time-new-ways <span>Minority Report-inspired interface allows us to explore time in new ways</span> <span><span>Michael Kwolek</span></span> <span><time datetime="2025-09-26T08:48:13-06:00" title="Friday, September 26, 2025 - 08:48">Fri, 09/26/2025 - 
08:48</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/2025-09/Proteus%204%20small.jpeg?h=82f92a78&amp;itok=7N7ZG_V5" width="1200" height="800" alt="People standing in a dark theater with various time lapse videos projected around them"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/1097" hreflang="en">B2</a> <a href="/atlas/taxonomy/term/771" hreflang="en">phd</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div><p dir="ltr"><span>Imagine a scene—a bird feeder on a summer afternoon, the dark of night descending over the Flatirons, a fall day on a university campus. Now imagine moving backwards and forwards through time on a single aspect of that setting while everything else remains. 
One ATLAS engineer is building technology that lets us experience multiple time scales all at once.</span></p><div class="ucb-box ucb-box-title-center ucb-box-alignment-right ucb-box-style-fill ucb-box-theme-lightgray"><div class="ucb-box-inner"><div class="ucb-box-title">The Proteus Team</div><div class="ucb-box-content"><p><em><span>David Hunter developed Proteus with expertise from ACME Lab members </span></em><a href="/atlas/suibi-che-chuan-weng" data-entity-type="external" rel="nofollow"><em><span>Suibi Weng</span></em></a><em><span>, </span></em><a href="/atlas/rishi-vanukuru" data-entity-type="external" rel="nofollow"><em><span>Rishi Vanukuru</span></em></a><em><span>, </span></em><a href="/atlas/anika-mahajan" data-entity-type="external" rel="nofollow"><em><span>Annika Mahajan</span></em></a><em><span>, </span></em><a href="/atlas/yi-ada-zhao" data-entity-type="external" rel="nofollow"><em><span>Ada Zhao</span></em></a><em><span> and </span></em><a href="/atlas/shih-yu-leo-ma" data-entity-type="external" rel="nofollow"><em><span>Leo Ma</span></em></a><em><span>, and advising from professor and lab director </span></em><a href="/atlas/ellen-yi-luen-do" data-entity-type="external" rel="nofollow"><em><span>Ellen Do</span></em></a><em><span>.&nbsp;</span></em></p><p><a href="/atlas/brad-gallagher-0" data-entity-type="external" rel="nofollow"><em><span>Brad Gallagher</span></em></a><em><span> and </span></em><a href="/atlas/chris-petillo" data-entity-type="external" rel="nofollow"><em><span>Chris Petillo</span></em></a><em><span> in the B2 Center for Media, Arts and Performance provided critical technical support to make the project come alive in the B2 Black Box Studio.</span></em></p></div></div></div><p dir="ltr"><span>“Proteus: Spatiotemporal Manipulations” by </span><a href="/atlas/david-hunter" data-entity-type="external" rel="nofollow"><span>David Hunter</span></a><span>, ATLAS PhD student, in collaboration with his ACME Lab colleagues, allows people to simultaneously observe different moments in time through a full-scale interactive experience combining video projection, motion capture, audio and cooperative elements.&nbsp;</span></p><p dir="ltr"><span>The project was shown through the creative residency program in the&nbsp;</span><a href="/atlas/b2-center-media-arts-performance" rel="nofollow"><span>B2 Center for Media, Arts and Performance</span></a><span> at the Roser ATLAS Center. Housed in the B2 Black Box Studio, the project used 270-degree video projections, a spatial audio array and motion capture technology to create an often larger-than-life way to explore many different time scales at once. Simultaneous projections highlighted the life cycle of bacteria in a petri dish, a day in the life of a street corner on campus and weather patterns at a global scale, among other vignettes.</span></p><p dir="ltr"><span>This tangible manipulation of time within a space can feel disorienting at first. It takes a while to adjust to what is happening, a unique sensation that has an expansive, almost psychedelic quality. But it may also have practical applications.</span></p>
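<p dir="ltr"><span><em>Under the hood, the effect is a form of non-uniform time sampling, in the spirit of the Khronos Projector work Hunter cites below: instead of every pixel showing the same instant, each region of the image is drawn from a different frame of the same footage. The sketch below is illustrative only (it is not the Proteus codebase) and assumes a pre-loaded stack of time-lapse frames plus a per-pixel map of time offsets.</em></span></p>
<pre>
import numpy as np

def sample_nonuniform_time(frames, time_map):
    """Compose a single image whose regions come from different moments.

    frames: array of shape (T, H, W, C), a stack of time-lapse frames.
    time_map: array of shape (H, W) with values in [0, 1], where 0 selects
    the earliest frame and 1 the latest; smooth maps give smooth time warps.
    """
    t = np.clip(np.rint(time_map * (frames.shape[0] - 1)).astype(int),
                0, frames.shape[0] - 1)
    rows, cols = np.indices(time_map.shape)
    # gather one frame index per pixel
    return frames[t, rows, cols]
</pre>
<p dir="ltr"><span><em>A visitor’s hand position could then simply edit the time map: a smooth bump centered on the hand would push that region of the scene into the past while the rest keeps playing.</em></span></p>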
<p dir="ltr"><span>We spoke to Hunter about the inspiration behind Proteus, possible use cases and what comes next.&nbsp;</span></p><p dir="ltr"><em><span>This Q&amp;A has been lightly edited for length and clarity.</span></em></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/2025-09/Proteus%204%20small.jpeg?itok=qsCGqivd" width="750" height="500" alt="People standing in a dark theater with various time lapse videos projected around them"> </div> <span class="media-image-caption"> <p><em>Several people can explore different moments in time simultaneously.</em></p> </span> </div> <p dir="ltr"><span><strong>Tell us about the inspiration for Proteus.</strong></span></p><p dir="ltr"><span>There are scenes and settings where you want to see a place or situation at two distinct time frames, and for that comparison to not necessarily be hard-edged or side by side. You want it to be interpolated through time, so you can see patterns of change in space over that time.&nbsp;</span></p><p dir="ltr"><span><strong>Describe the early iterations of this project.</strong></span></p><p dir="ltr"><span>It was originally a tabletop setup with projection over small robots, and you could manipulate the robots to produce similar kinds of effects. There was a version running with a camera giving you a live video feed, but we switched it to curated videos as it was easier to understand what was happening with time manipulation. You're potentially making quite a confusing image for yourself, so the robots gave you something tangible to hold on to.&nbsp;</span></p><p dir="ltr"><span><strong>How did it evolve into the large-scale installation it is now?</strong></span></p><p dir="ltr"><span>We thought it would be interesting to see this at a really large scale in the B2, and the creative residencies made that possible. That's when we moved away from robots and it was like, “Well, how would you control it in this sort of space?”&nbsp;</span></p><p dir="ltr"><span>I took a projection mapping course last semester and worked on a large-scale projection, but then we changed it to hand interaction—gesture-based in the air, kind of like “Minority Report.”&nbsp;</span></p> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-09/Proteus%201%20small.jpg?itok=FAO84sF0" width="1500" height="1001" alt="people in a dark theater with a huge projection of space around them"> </div> <span class="media-image-caption"> <p><em>As you move closer to or farther from the screen, you change the time scale of part of the scene.</em></p> </span> <p dir="ltr"><span><strong>How do you describe what takes place when people interact with Proteus?</strong></span></p><p dir="ltr"><span>The key is interaction—people can actually control the time lapse. Usually, time lapses are linear, not two dimensional, and we don't have control over them. Here, you can focus on what interests you across different time periods, or hold two points in time side by side to see patterns and relationships as they change across space and time. 
This also enables multiple visitors to find the things they are interested in; there isn't one controller of the scene, it is collaborative.</span></p><p dir="ltr"><span><strong>What are some of the use cases you’ve thought about for this technology?</strong></span></p><p dir="ltr"><span>Anything with a geospatial component—a complex scene where many things are happening at once. You might want to keep track of something happening in the past while still tracking something else happening at another time.&nbsp;</span></p><p dir="ltr"><span>You can use these portals to freeze multiple bits of action or set them up to visualize where things have gone at different points in time and space.&nbsp;</span></p><p dir="ltr"><span>It's always been about collaboration—situation awareness where lots of people are trying to interrogate one image and see what everyone else is doing at the same time.&nbsp;</span></p><p dir="ltr"><span>Then there’s film analysis: Can we put in a whole movie and perhaps you can find interesting relationships and compositions around that?&nbsp;This could be a fun way to spatially explore a narrative, too.</span></p><p dir="ltr"><span>We're also looking at how it could be used for mockup design. Let's say you're prototyping an app and you have 50 different variations. You could collapse those all into one space, interrogate them through “time,” then mix and match different portions of your designs to come up with new combinations. It also works with volumetric images like body scans, where we swap time for depth.</span></p><p dir="ltr"><span><strong>What are the creative influences that drove the visual style of the piece?&nbsp;</strong></span></p><p dir="ltr"><span>I've long been interested in time lapses, like skateboard photography where multiple snapshots are overlaid on the same space as a single image.</span></p><p dir="ltr"><span>There's all this work by Muybridge and Marey, who invented chronophotography. That's how they worked out that horses leave the ground while they run.&nbsp;</span></p><p dir="ltr"><span>David Hockney did a ton of Polaroid work. There's a famous one of people playing Scrabble, shot from his perspective. 
All these different Polaroids are stuck down next to each other, not as a true representation of space but as a way of capturing time within that space—breaking the unity of the image.&nbsp;</span></p><p dir="ltr"><span>Khronos Projector by Alvaro Cassinelli kicked off this research sprint and prompted me to look back at my interest in time and photography.</span></p><div class="row ucb-column-container"><div class="col ucb-column"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-09/Proteus%202%20small.jpg?itok=ydo01cdB" width="1500" height="1001" alt="Wrap-around screen projecting various images"> </div> <span class="media-image-caption"> <p><em>Several time lapse videos are projected simultaneously on the Black Box Studio's 270-degree screen.</em></p> </span> </div><div class="col ucb-column"> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-09/Proteus%203%20small.jpg?itok=jnFoaYNr" width="1500" height="1001" alt="Proteus controllers"> </div> <span class="media-image-caption"> <p><em>Proteus is controlled with an app on a mobile phone modified to work with motion capture.</em></p> </span> </div></div><p dir="ltr"><span><strong>It does force your brain to work in a way that it is not used to, which is a really cool thing to happen in a creative sense but also in a technical sense.</strong></span></p><p dir="ltr"><span>We aren't used to seeing anything with non-uniform time. Whenever we're watching a video, we want to find a point in the video, then we see the whole image rewind or fast forward. Of course that makes sense in a lot of situations, but there could be interesting use cases for interactive non-uniform time.</span></p><p dir="ltr"><span><strong>What makes ATLAS an ideal place for this type of research?</strong></span></p><p dir="ltr"><span>There's all the people, whether it's research faculty who are interested in asking questions like, “How can we make a novel system or improve research that's going on in this area?” Or it's super strong technical expertise, which is like, “Let's have a go at making this work in a projection environment.”</span></p><p dir="ltr"><span><strong>What’s next for the Proteus project?</strong></span></p><p dir="ltr"><span>At the moment, I can only compare by changing the time on a region of space, and I can compare a region of space against another region at different times. 
But it might be interesting to be able to break the image and say, “Actually, I want to clone this region and see it from a different time period.” Can I reconstitute the image in some way?</span></p><div class="row ucb-column-container"><div class="col ucb-column"><div class="ucb-box ucb-box-title-left ucb-box-alignment-none ucb-box-style-fill ucb-box-theme-lightgray"><div class="ucb-box-inner"><div class="ucb-box-title">Experience Proteus during <a href="/researchinnovation/week" data-entity-type="external" rel="nofollow">Research &amp; Innovation Week</a></div><div class="ucb-box-content"><p><span>Proteus will be running in the B2 Black Box Studio as part of:&nbsp;</span><br><br><a href="/atlas/research-open-labs-2025" rel="nofollow"><span><strong>ATLAS Research Open Labs</strong></span></a><br><span>Roser ATLAS Center</span><br><span>October 10, 2025</span><br><span>3-5pm</span><br><span>FREE, no registration needed</span></p></div></div></div></div><div class="col ucb-column"><p>&nbsp;</p></div></div></div> </div> </div> </div> </div> <div>ATLAS PhD student David Hunter researches novel ways to interact with different moments in time across a single video stream.</div> Fri, 26 Sep 2025 14:48:13 +0000 Michael Kwolek 5139 at /atlas New research explores tinkering as a key classroom learning method /atlas/new-research-explores-tinkering-key-classroom-learning-method <span>New research explores tinkering as a key classroom learning method</span> <span><span>Michael Kwolek</span></span> <span><time datetime="2025-07-16T10:21:34-06:00" title="Wednesday, July 16, 2025 - 10:21">Wed, 07/16/2025 - 10:21</time> </span> <div> <div class="imageMediaStyle focal_image_wide"> <img loading="lazy" src="/atlas/sites/default/files/styles/focal_image_wide/public/2025-07/Ranjan%20cartoonimator.jpg?h=71976bb4&amp;itok=lLOFZRwo" width="1200" height="800" alt="Krithik Ranjan presents Cartoonimator"> </div> </div> <div role="contentinfo" class="container ucb-article-categories" itemprop="about"> <span class="visually-hidden">Categories:</span> <div class="ucb-article-category-icon" aria-hidden="true"> <i class="fa-solid fa-folder-open"></i> </div> <a href="/atlas/taxonomy/term/703"> Feature </a> <a href="/atlas/taxonomy/term/855"> Feature News </a> <a href="/atlas/taxonomy/term/144"> News </a> </div> <div role="contentinfo" class="container ucb-article-tags" itemprop="keywords"> <span class="visually-hidden">Tags:</span> <div class="ucb-article-tag-icon" aria-hidden="true"> <i class="fa-solid fa-tags"></i> </div> <a href="/atlas/taxonomy/term/396" hreflang="en">ACME</a> <a href="/atlas/taxonomy/term/771" hreflang="en">phd</a> <a href="/atlas/taxonomy/term/1426" hreflang="en">phd student</a> <a href="/atlas/taxonomy/term/773" hreflang="en">research</a> </div> <a href="/atlas/michael-kwolek">Michael Kwolek</a> <div class="ucb-article-content ucb-striped-content"> <div class="container"> <div class="paragraph paragraph--type--article-content paragraph--view-mode--default"> <div class="ucb-article-text" itemprop="articleBody"> <div><p dir="ltr"><span>When kids tinker in the classroom, they build many useful skills, from computing to collaboration to creativity and more.&nbsp;</span></p> <div class="align-right image_style-small_500px_25_display_size_"> <div class="imageMediaStyle small_500px_25_display_size_"> 
<img loading="lazy" src="/atlas/sites/default/files/styles/small_500px_25_display_size_/public/2025-07/Ranjan%20cartoonimator.jpg?itok=6ghrbn_w" width="375" height="281" alt="Kirthik Ranjan presents Cartoonimator"> </div> </div> <p dir="ltr"><a href="/atlas/krithik-ranjan" rel="nofollow"><span>Krithik Ranjan</span></a><span>, PhD student and member of the&nbsp;</span><a href="/atlas/acme-lab" rel="nofollow"><span>ACME Lab</span></a><span>, studies low-cost forms of human-computer interaction that enable more people to explore their creativity through technology. And tinkering plays a big part in that.</span></p><p dir="ltr"><span>Ranjan presented his work at&nbsp;</span><a href="https://constructionism2025.inf.ethz.ch/" rel="nofollow"><span>Constructionism 2025</span></a><span> conference in Zurich, Switzerland, which explores how constructionist ideas can inspire advancements in learning technologies and methodologies. He filled us in on what he presented.</span></p><p dir="ltr"><span><strong>Tell us about your research focus.</strong></span></p><p dir="ltr"><span>I've been trying to build ways for people to create with technology in a more open-ended, tinkering-friendly way. Tinkering is a way to learn where you can explore, you can experiment, you can playfully interact with things to learn a concept, whether it is computer science, physics, astronomy or anything like that.</span></p><p dir="ltr"><span><strong>Can you describe the intention behind your paper on “</strong></span><a href="https://constructionism.oapublishing.ch/index.php/con/article/view/26" rel="nofollow"><span><strong>The Design Space of Tangible Interfaces for Computational Tinkerability</strong></span></a><span><strong>”?</strong></span></p><p dir="ltr"><span>There are two elements to it. One is the tangible side where the idea is that you're interacting with computational elements in the physical space, either stuff like paper or robots or different components that you can put together. And the other aspect is computational tinkerability, that playful open-ended aspect about creating something computationally.</span></p><p dir="ltr"><span>I want to understand how people have previously developed design spaces, which is this concept of a framework to understand what people have done, what it means, and how we can analyze and categorize different types of projects in that space. I reviewed 33 different projects to figure out: What are kids tinkering with? What are children making? And how are they making it?</span></p><p dir="ltr"><span>This project was from the perspective of a designer to inform future designers who are going to create such interfaces and projects.</span></p><p dir="ltr"><span><strong>What did you discover in conducting this research?</strong></span></p><p dir="ltr"><span>We figured out there is a range of how tinkerable or how expressive an interface can be, so we try to categorize that based on a “spectrum of tinkerability.”&nbsp;</span></p><p dir="ltr"><span>The other important takeaway is the idea of expanding beyond code. There's so much work, both commercially and in research, around enabling students to code. But a lot of researchers also found that this line-by-line type of programming is also a bit discouraging to students from underrepresented groups. So there's a lot of work in expanding the ways you can create with computers to not just rewrite lines of code to program or make a game or make a 3D model. 
but more diverse ways that suit different interests.&nbsp;</span></p> <div class="imageMediaStyle large_image_style"> <img loading="lazy" src="/atlas/sites/default/files/styles/large_image_style/public/2025-07/tinkerability%20ranjan%20spectrum.jpeg?itok=W_xwSrPw" width="1500" height="1469" alt="Spectrum of Tinkerability chart"> </div> <p dir="ltr"><span><strong>Who might be the audience for this research and what might they do with it?</strong></span></p><p dir="ltr"><span>The goal was to categorize this space so people can refer to it and design based on that. The audience is other researchers and designers of interfaces for learning with computers. Based on the implications, they can better design ways that students learn by making [tools] more expressive, more open-ended, learner-driven, and catering to different interests instead of just code or just one type of way to interact.&nbsp;</span></p><p dir="ltr"><span><strong>A lot of your work is focused on using simple materials that are more accessible to students all over the world. How might this research help educators expand the tools they have access to for students?</strong></span></p><p dir="ltr"><span>In practical situations like classrooms and formal learning centers, there are always constraints with resources, with the number of people, with the kind of things you can get access to. And quite often, the projects, interfaces and tools on the market are one-off and cost a couple hundred dollars or more. So I was trying to look at these projects in terms of the kind of materials they use and how they enable people to interact with the material.</span></p><p dir="ltr"><span>There are some projects like [ATLAS PhD] Ruhan Yang's&nbsp;</span><a href="/atlas/pabo-bot-paper-box-robots-everyone" rel="nofollow"><span>Paper Robots</span></a><span> and a couple other projects that we looked at where the focus was DIY-based interfaces that educators can fabricate themselves for their classrooms. These projects stressed publishing the plans and instructables and stuff like that online so that anybody can use them to build these interactive interfaces themselves.</span></p><p dir="ltr"><span>And part of this was also using platforms that are already available. Arduino, micro:bit and Raspberry Pi are commonly used electronic platforms in education for many different purposes. There's a way to make these interfaces more accessible if you use those existing platforms instead of making custom electronics.</span></p> <div class="align-right image_style-medium_750px_50_display_size_"> <div class="imageMediaStyle medium_750px_50_display_size_"> <img loading="lazy" src="/atlas/sites/default/files/styles/medium_750px_50_display_size_/public/article-image/smart_cartoonimator.jpg?itok=gz074D4z" width="750" height="534" alt="Cartoonimator key frame components and smartphone app"> </div> </div> <p dir="ltr"><span><strong>How does a student who uses your&nbsp;</strong></span><a href="/atlas/cartoonimator" rel="nofollow"><span><strong>Cartoonimator</strong></span></a><span><strong> tool, for example, learn in the process of figuring out how to use it and then make animations?</strong></span></p><p dir="ltr"><span>This idea of tinkering and learning by making is based on these learning philosophies of constructionism. 
The idea is that you're learning by building artifacts, building mental models yourself instead of being instructed by somebody else.</span></p><p dir="ltr"><span>A big part is the idea of being stuck and then trying to work through the problem, trying to figure out what's wrong, what needs to change to get something working. This way of learning is focused on the learner's motivation.</span></p><p dir="ltr"><span><strong>What's next for this research?</strong></span></p><p dir="ltr"><span>Cartoonimator is one example where I looked at these principles about expanding beyond code and working with more accessible materials to build an interface that's open-ended and tinkering-friendly for learning something like animation.</span></p><p dir="ltr"><span>I'm looking further at how we can engage students with physical computing using paper, because that is more accessible, easier to expand on and gives you this space of creative exploration that you may not usually have with devices.&nbsp;</span></p><p dir="ltr"><span>If you're new to technology, you might be afraid of breaking something apart, but that's really a core part of tinkering, so I'm looking at how paper-based interfaces can foster the idea.</span></p></div> </div> </div> </div> </div> <div>PhD student Krithik Ranjan analyzed 33 student learning tools and developed a “spectrum of tinkerability” that offers designers new ways to think about teaching computational skills.</div> Wed, 16 Jul 2025 16:21:34 +0000 Michael Kwolek 5102 at /atlas