Abstract: We are in the midst of a Cambrian explosion of generative AI-enabled user experiences in research and industry. Much of the user interaction with such models has arguably focused on straightforward wrappers for interacting *with* powerful models: UIs collect text prompts for large language models and show text results; or take text input and return images for text-to-image models; etc. We introduce a complementary perspective of interacting *through* generative AI models, by introducing systems that translate information useful for user interaction to (and from) a format appropriate for these models. We call these systems UI Transducers. I will provide an initial characterization of the space of such applications based on a number of examples from our research group.
Afterwards, I will raise a more fundamental question about the role of generative AI in interactive computing systems: Do we even know what we want from these systems? A key implicit assumption of many tools is that the user knows what they want, and they just need appropriate software at the right level of abstraction to specify their goal. However, we know from studying various creative domains that people often don't know what they want a priori. They have a vague, ambiguous idea and it's only through iterative engagement with a medium that they clarify their goal. Taking this perspective has implications for what our generative AI-powered assistants should do for us and how they should engage with us.
  * 10:00-10:25 Shreya Shankar (20min) {{ :retreats:2024spring:evaluationassistants.pdf |“Scaling up “Vibe Checks” for Large Language Models”}}
  * 10:25-10:50 - coffee break
  * 10:50-12:05 (3 talks)
    * Chanwut (Mick) Kittivorawong (20min) {{ :retreats:2024spring:spatialyze-chanwut-kittivorawong.pdf |“Spatialyze: A Geospatial Video Analytics System with Spatial-Aware Optimizations”}}
    * Sarah Wooders/Charles Packer (20min) “MemGPT: An OS for LLMs”
    * Madelon Hulsebos (15min) - “Revisiting Dataset Search Systems”
  * 12:05 pm - 1:30 pm: Lunch + Discussion Tables
  
  
  * 1:30-2:45 (3 talks)
    * Shishir Patil (20min) - “Gorilla: Connecting LLMs to apps and services”
    * Gabriel Matute (20min) - “Supporting syntactic code search with incomplete code fragments”
    * Yiming Lin (20min) - {{ :retreats:2024spring:zendb-yiming_lin.pdf |“ZenDB: Towards A Data Management System for Free Text”}}
  * 2:45-3:00 - coffee
  * 3:00-3:50: [Keynote] Eugene Wu - Associate Prof., Columbia, “Systems for Human Data Interaction”
  * 3:50-4:00 - break
  * 4:00-5:30 (4 talks)
    * Tristan Chambers (20 min) “Evolving knowledge: LLM-powered data extraction from unstructured police violence and misconduct records”
    * Tianjun Zhang (20 min) - “RAFT: teaching LLMs how to RAG”
    * Sabriya Alam (20 min) “SAGE: System for Accessible Guided Exploration of Health Information”
    * Parth Asawa (15 min) “Revisiting Prompt Engineering via Declarative Crowdsourcing”
  * 6:00 pm: Dinner
  