  - **Document Organization Three Ways**\\ Despite advances in natural language processing, computer vision, and other techniques that simplify the processing of large, unstructured documents such as PDFs, present-day tools remain difficult to use. Many experts from non-technical domains continue to process large, messy document datasets manually, while others become self-taught programmers. For teams with limited time, budgets, and computing education, this is a heavy burden. Our study assesses the learnability of three categories of programming interaction for document processing: textual, visual, and programming-by-example. We conducted a counterbalanced within-subject study (n=12) in which participants used all three programming paradigms. Our qualitative analysis reveals patterns in their relative benefits, including how participants reported that visual programming paradigms gave them a broader understanding of their data. Our results suggest design opportunities for tools that aim to help domain experts complete programming tasks.\\ \\
  - **Exploring the Learnability of Program Synthesizers by Novice Programmers**\\ Tools known as program synthesizers show promise to lighten the burden of programming by automatically writing code for users, but little research has addressed what contributes to and detracts from their learnability by novice programmers. For example:
    - How do synthesizers' user interaction models affect their learnability?
    - What kinds of inputs are least burdensome to learn to provide to the synthesizer?
    - What common misconceptions do novice programmers demonstrate in their use of program synthesizers?\\ We observed novice programmers working with real, released program synthesizers to answer these questions and more.\\ From our analysis, we provide a set of design opportunities to inform the design of future program synthesizers. Our findings have ramifications for the use of program synthesis in data work. //(An illustrative programming-by-example sketch appears after this list.)//\\ \\
  - **Always-on Visualization Recommendations**\\ Exploratory data science largely happens in computational notebooks with dataframe APIs, such as pandas, that support flexible means to transform, clean, and analyze data. Yet, visually exploring data in dataframes remains tedious, requiring substantial programming effort for visualization and mental effort to determine what analysis to perform next. We propose Lux, an always-on framework for accelerating visual insight discovery in dataframe workflows. When a dataframe is printed, Lux recommends visualizations to provide a quick overview of patterns and trends and to suggest promising analysis directions. Users can tailor recommendations via a lightweight intent language. Lux also leverages scalable data computation techniques to generate recommendations quickly. Lux has been embraced by data science practitioners -- and especially by novice data scientists -- with over 400K downloads and 4.2K stars on GitHub. //(An illustrative usage sketch appears after this list.)//\\ \\
  - **Human-Centered Tools for Reliable Use of Machine Translation**\\ Although machine translation (MT) technology has been rapidly improving, actual user needs for these systems remain relatively poorly understood and, as a result, unmet. For example, current MT systems do not help users understand when they can rely on translations, or when the system has made an error. MT holds great potential to increase access to information and improve social interactions across languages. However, undetected mistranslations can cause serious harm, especially when MT is used in high-stakes settings like healthcare. In this talk, I will discuss how we might develop MT tools that provide actionable, useful support for users to understand when translations are reliable and to recover or adapt when they are not. In ongoing work, we are developing tools to improve written cross-lingual communication in medical settings. By combining pre-translated phrases and machine translation, we strive to provide clinicians with greater insight into and control over output accuracy when crafting instructions for patients.\\ \\
  - **A Conversational Interface for Automatic Visualization**\\ Generating visualizations is a key step in exploratory data analysis but can be time-consuming and complicated in no-code environments. Visualizations are also not static; as more information is discovered through exploratory data analysis, new visualizations need to be built to answer new questions. We introduce a conversational natural language interface for creating visualizations from data. Our approach is the first to use large language modeling for generating visualizations in a conversational setting. //(A generic prompting sketch appears after this list.)//\\ \\
  - **Iterative Design of Semantic Grouping Guidelines and Metrics for Mobile User Interfaces**\\ While prior research on widget grouping in mobile user interface (UI) design has focused on visual grouping, little work has been devoted to the semantic coherence of such groupings, which affects user understanding of the interface. We propose five design guidelines that are generally applicable for semantic element grouping in mobile UIs. We generated the guidelines through an iterative process: they were first conceived through empirical observations of existing mobile UIs and a literature review, refined through multiple rounds of feedback from UI design experts, and finally evaluated with an expert review. The feedback from experts indicates a strong need for these guidelines, as the design and evaluation of semantic grouping is currently conducted based on intuition. In addition to being a useful resource for UI design, these guidelines could lead to computational methods to evaluate interfaces. We experimented with computational metrics built from these guidelines that show promising results.\\ \\
  - **A Cross-Domain Need-Finding Study with Users of Geospatial Data**\\ Geospatial data—such as multispectral satellite imagery, geographically-enriched demographic data, and crowdsourced datasets like OpenStreetMap—is more available today than ever before. This data is playing an increasingly critical role in the work of Earth and climate scientists, social scientists, and data journalists exploring spatiotemporal change in our environment and societies. However, existing software and programming tools for geospatial analysis and visualization are challenging to learn and difficult to use. Many domain experts are unfamiliar with both the theory of geospatial data and the specialized Geographic Information System (GIS) software used to work with such data. While libraries for geospatial analysis and visualization are increasingly common in Python, R, and JavaScript, they still require proficiency with at least one of these programming languages in addition to geospatial data theory. In short, domain experts face steep challenges in gathering, transforming, analyzing, and visualizing geospatial data.\\ The aim of this research is to investigate the specific computing needs of the diversifying community of geospatial data users. This poster will present findings from a contextual inquiry study (n = 25) with Earth and climate scientists, social scientists, and data journalists using geospatial data in their current work. We will focus on key challenges identified in our thematic analysis, including (1) finding and transforming geospatial data to satisfy spatiotemporal constraints, (2) understanding the behavior of geospatial operators, (3) tracking geospatial data provenance, and (4) efficiently exploring the cartographic design space. We will also discuss the design opportunities these findings suggest for new geospatial analysis and visualization systems.\\ \\
  - **Striking a Balance: Reader Takeaways and Preferences when Integrating Text and Charts**\\ Visualizations frequently use text to guide and inform readers. Prior work in visualization research indicates that text has an influence on reader conclusions, but there is little empirical evidence supporting the best way to integrate text and charts. Designers lack guidance around the proper amount of text to show, what content to use, and where to position it. Furthermore, personal preferences vary with regard to visual and textual representations.\\ In this study, we explored several research questions about the textual components of visualizations. A total of 302 participants viewed univariate line charts with differing amounts of text. This text varied in content and position. Participants ranked charts according to preference, with stimuli ranging from charts with no text except axis labels to a full written paragraph. They also provided their conclusions from charts with only one or two pieces of text with varying content and position. From these responses, we found that participants prefer charts with a greater amount of text in comparison to charts with fewer pieces of text or text alone. We also found that the content of the text affects reader conclusions. For example, text that describes statistical or relational components of a chart leads to more takeaways referring to statistics or relational comparisons than text describing chart elements. Additionally, the effect of certain content depended on the placement of the text on the chart. Some content is best placed in the title, while other content should be placed close to the data. We compiled these results into four visualization design guidelines. //(A toy placement example appears after this list.)//\\ \\
  - **Data cleaning for acronyms, abbreviations, and typos derived from manual entry**\\ In many no-code data tools, such as spreadsheets, users often manually fill values into cells. In this process, even values that refer to the same underlying concept can take on many forms, thanks to users introducing acronyms, abbreviations, and typos. Collapsing these values down to a canonical set for the purpose of data cleaning is a challenge. For example, public defender units we work with took multiple weeks to manually collapse the values (for columns such as police title or command) to a smaller canonical set. There is a need for an automated way to deal with acronyms, abbreviations, and typos: specifically, a metric that maps values referring to the same underlying concept to one another while accounting for acronyms, abbreviations, and typos. We also wanted an efficient way to employ this metric to collapse the values down to a canonical set. We developed a new distance metric that preserves the “key” structures of a value, allowing values that refer to the same concept to be mapped together. For example, “School Resource Officer” would map to “Sc Rs Off”, “SRO”, and “Scres off”. We further developed a dynamic programming algorithm that efficiently computes the score for two values, along with ways to prune poor matches without complete evaluation. We embedded our approach into the popular open-source data cleaning tool OpenRefine and demonstrated substantial improvements relative to the state of the art. //(A simplified matching sketch appears after this list.)//\\ \\
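To make the programming-by-example interaction model in the synthesizer-learnability abstract concrete, here is a minimal, hypothetical sketch: the user's only input is a set of input-output examples, and a brute-force search over a tiny string-transformation DSL stands in for the synthesizer. It is not one of the released tools studied, and every name in it is made up.

<code python>
# Minimal programming-by-example sketch (illustrative only).
# The "DSL" is a handful of string transformations; synthesis is a brute-force
# search for a composition of at most two operations that fits every example.
from itertools import product

OPERATIONS = {
    "identity":   lambda s: s,
    "lowercase":  lambda s: s.lower(),
    "uppercase":  lambda s: s.upper(),
    "strip":      lambda s: s.strip(),
    "first_word": lambda s: s.split()[0] if s.split() else s,
    "initials":   lambda s: "".join(w[0].upper() for w in s.split()),
}

def synthesize(examples, max_depth=2):
    """Return operation names (applied left to right) that map every example
    input to its output, or None if no program in the DSL fits."""
    names = list(OPERATIONS)
    for depth in range(1, max_depth + 1):
        for program in product(names, repeat=depth):
            def run(s, program=program):
                for name in program:
                    s = OPERATIONS[name](s)
                return s
            if all(run(inp) == out for inp, out in examples):
                return list(program)
    return None

# The user "programs" by giving examples instead of writing code:
examples = [("Grace Hopper", "GH"), ("Alan Turing", "AT")]
print(synthesize(examples))   # -> ['initials']
</code>

Even this toy version surfaces the learnability questions the abstract asks: the user has to guess which examples are informative enough, and an under-specified example set (for instance, a single example) can be satisfied by several different programs.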
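The always-on recommendation workflow in the Lux abstract fits in a few lines of notebook code. The sketch below follows Lux's documented usage as we understand it (importing ''lux'' augments pandas dataframes, and ''df.intent'' steers recommendations); the dataframe contents and column names are placeholders.

<code python>
# Illustrative Lux usage inside a Jupyter notebook (data and columns made up).
import pandas as pd
import lux  # pip install lux-api; importing it registers Lux with pandas

df = pd.DataFrame({
    "region":     ["N", "S", "E", "W"] * 25,
    "population": range(100),
    "income":     [x * 1.3 for x in range(100)],
})

df  # displaying the dataframe in a notebook cell also shows recommended charts

# A lightweight intent narrows the recommendations to specific columns.
df.intent = ["income", "population"]
df  # recommendations now focus on income vs. population
</code>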
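The conversational-visualization abstract does not spell out its pipeline, so the following is only a generic sketch of prompting a large language model for a chart specification. The ''call_llm'' helper is hypothetical (a stand-in for whatever model API is used), and the prompt format and use of Vega-Lite are assumptions, not the authors' design.

<code python>
# Hedged sketch of conversational, LLM-driven chart generation.
# The model is asked for a Vega-Lite spec so its output can be validated as
# JSON and rendered by any Vega-Lite frontend.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a large-language-model completion call."""
    raise NotImplementedError("wire this up to a model provider")

def visualize_from_conversation(columns, history, request):
    prompt = (
        "You generate Vega-Lite JSON specifications.\n"
        f"Dataset columns: {', '.join(columns)}\n"
        f"Conversation so far: {history}\n"
        f"New request: {request}\n"
        "Respond with a single Vega-Lite spec as JSON and nothing else."
    )
    return json.loads(call_llm(prompt))  # fails loudly if the reply is not JSON

# Example call (raises until call_llm is implemented):
# spec = visualize_from_conversation(
#     columns=["year", "emissions", "country"],
#     history=["show emissions over time"],
#     request="now break it down by country",
# )
</code>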
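The //Striking a Balance// abstract distinguishes text that belongs in the title from text that belongs next to the data. The toy chart below (made-up data, not a study stimulus) simply shows those two placements side by side.

<code python>
# Toy illustration of two text placements: a takeaway in the title vs. a
# statistical note annotated close to the data.
import matplotlib.pyplot as plt

years = list(range(2010, 2021))
values = [3, 4, 4, 5, 7, 8, 11, 13, 14, 17, 21]

fig, ax = plt.subplots()
ax.plot(years, values, marker="o")

# Placement 1: a high-level takeaway in the title.
ax.set_title("Values roughly doubled every five years")

# Placement 2: a specific statistic placed next to the relevant data point.
ax.annotate("2020 peak: 21", xy=(2020, 21), xytext=(2015, 18),
            arrowprops=dict(arrowstyle="->"))

ax.set_xlabel("Year")
ax.set_ylabel("Value")
plt.show()
</code>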
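The data-cleaning abstract describes a distance metric that preserves the "key" structure of a value plus a dynamic-programming algorithm for scoring pairs. The sketch below captures only the matching intuition in simplified, boolean form (each word of the canonical value may contribute an in-order selection of its letters, starting with the word's first letter); the actual metric produces a score, handles typos, and prunes poor matches, none of which is shown here.

<code python>
# Simplified "key structure" matching for messy categorical values.
# Boolean match only; the real metric scores pairs and also handles typos.
from functools import lru_cache

def matches_key_structure(short: str, full: str) -> bool:
    """True if `short` reads as an acronym/abbreviation of `full`: each word
    of `full` contributes nothing, or an in-order selection of its letters
    beginning with that word's first letter."""
    words = full.lower().split()
    s = "".join(ch for ch in short.lower() if ch.isalnum())

    @lru_cache(maxsize=None)
    def consume(i, j):
        if j == len(s):
            return True                    # all of the short form explained
        if i == len(words):
            return False                   # unexplained characters remain
        if consume(i + 1, j):              # this word contributes nothing
            return True
        word = words[i]
        if word[0] != s[j]:                # a contribution must start with
            return False                   # the word's first letter

        def extend(w_idx, s_idx):
            if consume(i + 1, s_idx):      # hand the rest to later words
                return True
            if s_idx == len(s):
                return False
            for k in range(w_idx, len(word)):     # or keep taking letters
                if word[k] == s[s_idx]:           # of this word, in order
                    if extend(k + 1, s_idx + 1):
                        return True
            return False

        return extend(1, j + 1)

    return consume(0, 0)

canonical = "School Resource Officer"
for messy in ["SRO", "Sc Rs Off", "Scres off", "Sheriff"]:
    print(messy, "->", matches_key_structure(messy, canonical))
# "SRO", "Sc Rs Off", and "Scres off" match; "Sheriff" does not.
</code>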