Charts can be social artifacts that communicate more than just data

The degree to which someone trusts the information depicted in a chart can depend on their assumptions about who made the data visualization, according to a pair of studies by MIT researchers.

For instance, if someone infers that a graph about a controversial topic like gun violence was produced by an organization they feel is in opposition to their beliefs or political views, they may discredit the information or dismiss the visualization altogether.

The researchers found that even the clearest visualizations often communicate more than the data they explicitly depict, and can elicit strong judgments from viewers about the social contexts, identities, and characteristics of those who made the chart.

Readers make these assessments about the social context of a visualization primarily from its design features, like the color palette or the way information is arranged, rather than the underlying data. Often, these inferences are unintended by the designers.

 » Read More

New software designs eco-friendly clothing that can reassemble into new items

It’s hard to keep up with the ever-changing trends of the fashion world. What’s “in” one season is often out of style the next, prompting you to re-evaluate your wardrobe.

Staying current with the latest fashion styles can be wasteful and expensive, though. Roughly 92 million tons of textile waste are produced annually, including the clothes we discard when they go out of style or no longer fit. But what if we could simply reassemble our clothes into whatever outfits we wanted, adapting to trends and the ways our bodies change?

A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Adobe is attempting to bring eco-friendly, versatile garments to life. Their new “Refashion” software system breaks down fashion design into modules — essentially, smaller building blocks — by allowing users to draw, plan,

 » Read More

MIT software tool turns everyday objects into animated, eye-catching displays

Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.

But now, thanks to MIT researchers, it’s possible to create dynamic displays without electronics by using barrier-grid animations (or scanimations), which rely on printed materials instead. This visual trick involves sliding a patterned sheet across an image to create the illusion of motion. The secret of a barrier-grid animation lies in its name: an overlay called a barrier (or grid), often resembling a picket fence, moves across, rotates around, or tilts toward an image to reveal the frames of an animated sequence. That underlying picture is a combination of each still,
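To make the idea concrete, here is a minimal sketch of the general frame-interleaving technique behind barrier-grid animations — an illustrative reconstruction, not the MIT tool’s code — assuming grayscale frames stored as NumPy arrays and horizontal strips:

```python
import numpy as np

def interleave_frames(frames, strip_width=1):
    """Combine N frames into one barrier-grid image.

    Row r of the output comes from frame (r // strip_width) % N, so a
    barrier with slits strip_width pixels tall, spaced N * strip_width
    apart, reveals one frame at a time as it slides over the print.
    """
    n = len(frames)
    height = frames[0].shape[0]
    out = np.zeros_like(frames[0])
    for r in range(height):
        out[r] = frames[(r // strip_width) % n][r]
    return out

# Toy example: three 6x4 "frames" filled with constant values 1, 2, 3.
frames = [np.full((6, 4), v, dtype=np.uint8) for v in (1, 2, 3)]
combined = interleave_frames(frames)
print(combined[:, 0])  # rows cycle through the source frames: 1 2 3 1 2 3
```

Sliding a barrier with one-pixel slits down this image by one row at a time would reveal frames 1, 2, and 3 in sequence.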

 » Read More

A shape-changing antenna for more versatile sensing and communication

MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.

A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.
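As rough intuition for why changing shape changes frequency — a textbook idealization, not the team’s metamaterial model — a half-wave dipole resonates at f = c / (2L), so stretching the antenna to a longer length L lowers its resonant frequency:

```python
# Idealized half-wave dipole: resonant frequency f = c / (2 * L).
# This is the standard textbook relation, not the MIT design itself,
# but it shows how physical length maps to operating frequency.
C = 299_792_458  # speed of light in vacuum, m/s

def resonant_freq_hz(length_m):
    """Resonant frequency of an ideal half-wave dipole of length L."""
    return C / (2 * length_m)

for length in (0.05, 0.075, 0.10):  # stretching from 5 cm to 10 cm
    print(f"{length * 100:4.1f} cm -> {resonant_freq_hz(length) / 1e9:.2f} GHz")
```

Doubling the length halves the resonant frequency, which is why a reversibly stretchable structure can cover a band that would otherwise require several fixed antennas.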

The word “antenna” may draw to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.

The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices,

 » Read More

Unpacking the bias of large language models

Research has shown that large language models (LLMs) tend to overemphasize information at the beginning and end of a document or conversation, while neglecting the middle.

This “position bias” means that, if a lawyer is using an LLM-powered virtual assistant to retrieve a certain phrase in a 30-page affidavit, the LLM is more likely to find the right text if it is on the initial or final pages.

MIT researchers have discovered the mechanism behind this phenomenon.

They created a theoretical framework to study how information flows through the machine-learning architecture that forms the backbone of LLMs. They found that certain design choices that control how the model processes input data can cause position bias.
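As a toy illustration of this kind of mechanism — not the researchers’ actual framework — consider attention-only mixing under a causal mask, where each token attends uniformly to itself and everything before it. After a few layers, the earliest positions accumulate the most influence over the final token:

```python
import numpy as np

def uniform_causal_attention(n):
    """Row-stochastic causal attention: token i attends equally to tokens 0..i."""
    mask = np.tril(np.ones((n, n)))
    return mask / mask.sum(axis=1, keepdims=True)

# Influence of each input position on the final token after several
# layers of attention-only mixing under a causal mask.
n_tokens, n_layers = 10, 4
A = uniform_causal_attention(n_tokens)
influence = np.linalg.matrix_power(A, n_layers)[-1]
print(np.round(influence, 3))  # weights decrease from position 0 onward
```

Even with perfectly uniform attention, causal masking alone skews influence toward early positions, a simplified version of how architectural choices can give rise to position bias.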

Their experiments revealed that model architectures, particularly those affecting how information is spread across input words within the model, can give rise to or intensify position bias,

 » Read More

LLMs factor in unrelated information when recommending medical treatments

A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers.

They found that making stylistic or grammatical changes to messages increases the likelihood an LLM will recommend that a patient self-manage their reported health condition rather than come in for an appointment, even when that patient should seek medical care.

Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who, according to human doctors, were erroneously advised not to seek medical care.
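A minimal sketch of this kind of perturbation audit — illustrative only; the study’s actual perturbations and models differ — rewrites a patient message with informal hedging, lowercasing, and extra white space, so that the model’s recommendations on the two versions can be compared:

```python
def perturb(message):
    """Apply nonclinical edits of the kind the study examined:
    uncertain/informal hedging, lowercasing, and extra white space.
    The clinical content of the message is unchanged."""
    hedged = "i guess " + message.lower()  # uncertain, informal tone
    return hedged.replace(" ", "  ")       # extra white space throughout

original = "Patient reports chest pain radiating to the left arm."
print(perturb(original))
```

An audit would send both the original and the perturbed message to the model under test and flag any case where the treatment recommendation flips, since the underlying clinical facts are identical.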

This work “is strong evidence that models must be audited before use in health care — which is a setting where they are already in use,” says Marzyeh Ghassemi,

 » Read More

Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event

Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.

The call received 180 submissions from nearly 250 faculty members, spanning all five of MIT’s schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.

Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13.

 » Read More

Bringing meaning into technology deployment

In 15 TED Talk-style presentations, MIT faculty recently discussed pioneering research that incorporates social, ethical, and technical considerations and expertise, with each project supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.

“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing,

 » Read More

Study shows vision-language models can’t handle queries with negation words

Imagine a radiologist examining a chest X-ray from a new patient. She notices the patient has swelling in the tissue but does not have an enlarged heart. Looking to speed up diagnosis, she might use a vision-language machine-learning model to search for reports from similar patients.

But if the model mistakenly identifies reports with both conditions, the most likely diagnosis could be quite different: If a patient has tissue swelling and an enlarged heart, the condition is very likely to be cardiac related, but with no enlarged heart there could be several underlying causes.

In a new study, MIT researchers have found that vision-language models are extremely likely to make such a mistake in real-world situations because they don’t understand negation — words like “no” and “doesn’t” that specify what is false or absent. 
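A toy model — not the vision-language models studied — makes the failure mode concrete: if a retrieval system effectively treats negation words as ignorable, the query “tissue swelling, no enlarged heart” matches exactly the report it should exclude:

```python
def similarity(query, doc, ignore_negation=True):
    """Toy bag-of-words match score. Dropping negation words mimics
    the reported behavior of vision-language models; it is not how
    those models are actually implemented."""
    stop = {"no", "not", "without"} if ignore_negation else set()
    q = {w for w in query.lower().split() if w not in stop}
    d = {w for w in doc.lower().split() if w not in stop}
    return len(q & d) / len(q)

query = "tissue swelling no enlarged heart"
report_a = "tissue swelling and enlarged heart"      # should be excluded
report_b = "tissue swelling with normal heart size"  # the right match
print(similarity(query, report_a), similarity(query, report_b))
```

With negation words ignored, the wrong report scores a perfect match (1.0) while the correct one scores lower (0.75), so the negated condition retrieves exactly the cases it was meant to rule out.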

“Those negation words can have a very significant impact, and if we are just using these models blindly,

 » Read More