MIT software tool turns everyday objects into animated, eye-catching displays


Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.

But now, thanks to MIT researchers, it’s possible to make dynamic displays without electronics, using printed barrier-grid animations (also called “scanimations”). This visual trick involves sliding a patterned sheet across an image to create the illusion of motion. The secret of barrier-grid animations lies in the name: An overlay called a barrier (or grid), often resembling a picket fence, moves across, rotates around, or tilts toward an image to reveal the frames of an animated sequence. That underlying picture is a combination of each still,
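The interleaving idea behind a barrier-grid animation can be sketched in a few lines. This is an illustrative toy, not the researchers’ software: each frame contributes every N-th pixel column to a composite, and a barrier with slits spaced N columns apart reveals exactly one frame depending on where it sits.

```python
# Toy barrier-grid ("scanimation") composite built from pixel grids.
# Each of the n frames contributes every n-th column; a barrier with
# one-column slits spaced n apart reveals one frame at a time.

def interlace(frames):
    """Combine equally sized frames column-by-column into one image."""
    n = len(frames)
    height = len(frames[0])
    width = len(frames[0][0])
    # Column x of the composite comes from frame (x mod n).
    return [[frames[x % n][y][x] for x in range(width)]
            for y in range(height)]

def visible(composite, offset, n):
    """Pixels seen through a barrier whose slits sit at columns where
    x % n == offset -- i.e., one original frame of the sequence."""
    return [[row[x] for x in range(len(row)) if x % n == offset]
            for row in composite]
```

Sliding the barrier by one column (changing `offset`) switches which frame is revealed, which is exactly the moving-image illusion the article describes.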

 » Read More

MIT tool visualizes and edits “physically impossible” objects


M.C. Escher’s artwork is a gateway into a world of depth-defying optical illusions, featuring “impossible objects” that break the laws of physics with convoluted geometries. What you perceive his illustrations to be depends on your point of view — for example, a person seemingly walking upstairs may be heading down the steps if you tilt your head sideways.

Computer graphics scientists and designers can recreate these illusions in 3D, but only by bending or cutting a real shape and positioning it at a particular angle. This workaround has downsides, though: Changing the smoothness or lighting of the structure will expose that it isn’t actually an optical illusion, which also means you can’t accurately solve geometry problems on it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a unique approach to represent “impossible” objects in a more versatile way. Their “Meschers” tool converts images and 3D models into 2.5-dimensional structures,

 » Read More

How repetition helps art speak to us


Often when we listen to music, we just instinctually enjoy it. Sometimes, though, it’s worth dissecting a song or other composition to figure out how it’s built.

Take the 1953 jazz standard “Satin Doll,” written by Duke Ellington and Billy Strayhorn, whose subtle structure rewards a close listening. As it happens, MIT Professor Emeritus Samuel Jay Keyser, a distinguished linguist and an avid trombonist on the side, has given the song careful scrutiny.

To Keyser, “Satin Doll” is a glittering example of what he calls the “same/except” construction in art. A basic rhyme, like “rent” and “tent,” is another example of this construction, given the shared rhyming sound and the different starting consonants.

In “Satin Doll,” Keyser observes, both the music and words feature a “same/except” structure. For instance, the rhythm of the first two bars of “Satin Doll” is the same as the second two bars,

 » Read More

Have a damaged painting? Restore it in just hours with an AI-generated “mask”


Art restoration takes steady hands and a discerning eye. For centuries, conservators have restored paintings by identifying areas needing repair, then mixing an exact shade to fill in one area at a time. Often, a painting can have thousands of tiny regions requiring individual attention. Restoring a single painting can take anywhere from a few weeks to over a decade.

In recent years, digital restoration tools have opened a route to creating virtual representations of original, restored works. These tools apply computer vision, image recognition, and color-matching techniques to generate a “digitally restored” version of a painting relatively quickly.
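To make the color-matching idea concrete, here is a deliberately crude sketch, not Kachkine’s method: given an image and a mask marking damaged pixels, each damaged pixel is filled with the average of its intact neighbors, the simplest form of the region-by-region infill that digital tools automate.

```python
def fill_damaged(image, mask):
    """Return a copy of a grayscale image (list of rows of numbers)
    where each damaged pixel (mask[y][x] is True) is replaced by the
    average of its intact 4-neighbours -- a toy stand-in for the
    colour matching that digital restoration tools perform."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            vals = [image[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
            if vals:  # leave the pixel alone if no intact neighbour exists
                out[y][x] = sum(vals) / len(vals)
    return out
```

Real tools work on color images and thousands of regions at once, but the structure is the same: locate damage, then synthesize plausible values from the surviving paint.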

Still, there has been no way to translate digital restorations directly onto an original work, until now. In a paper appearing today in the journal Nature, Alex Kachkine, a mechanical engineering graduate student at MIT, presents a new method he’s developed to physically apply a digital restoration directly onto an original painting.

 » Read More

Animation technique simulates the motion of squishy objects

Animators could create more realistic bouncy, stretchy, and squishy characters for movies and video games thanks to a new simulation method developed by researchers at MIT.

Their approach allows animators to simulate rubbery and elastic materials in a way that preserves the physical properties of the material and avoids pitfalls like instability.

The technique simulates elastic objects for animation and other applications with improved reliability: many existing simulation techniques produce elastic animations that become erratic or sluggish, or even break down entirely.

To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.
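Why convexity buys reliability can be seen in a minimal example (an illustration of the general principle, not the researchers’ formulation): the deformation energy of a 1D spring chain is a convex function of the node positions, so plain gradient descent on it cannot get trapped or blow up for a small enough step size; it always settles into the single physically correct rest state.

```python
def energy(x, k=1.0, rest=1.0):
    """Convex elastic energy of a 1D chain of nodes at positions x,
    with unit springs of stiffness k and rest length `rest`."""
    return sum(0.5 * k * (x[i + 1] - x[i] - rest) ** 2
               for i in range(len(x) - 1))

def descend(x, steps=500, lr=0.2, k=1.0, rest=1.0):
    """Minimise the energy by gradient descent; node 0 is pinned.
    Because the energy is convex, this converges to the unique
    minimum for any starting configuration."""
    x = list(x)
    for _ in range(steps):
        g = [0.0] * len(x)
        for i in range(len(x) - 1):
            d = k * (x[i + 1] - x[i] - rest)  # spring force term
            g[i] -= d
            g[i + 1] += d
        g[0] = 0.0  # anchor the first node
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

A non-convex energy would offer no such guarantee: the solver could stall in a spurious local minimum or oscillate, which is exactly the erratic behavior the MIT method avoids.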

Wiggly gummy bears » Read More

Hybrid AI model crafts smooth, high-quality videos in seconds


What would a behind-the-scenes look at a video generated by an artificial intelligence model be like? You might think the process is similar to stop-motion animation, where many images are created and stitched together, but that’s not quite the case for “diffusion models” like OpenAI’s Sora and Google’s Veo 2.

Instead of producing a video frame-by-frame (or “autoregressively”), these systems process the entire sequence at once. The resulting clip is often photorealistic, but the process is slow and doesn’t allow for on-the-fly changes. 
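The two generation patterns can be contrasted in schematic form. This is a hypothetical sketch of the control flow only (the `next_frame` and `refine` callables stand in for learned models, which are not shown): the autoregressive loop emits one frame at a time and can be interrupted or steered mid-stream, while the diffusion-style loop must repeatedly refine the whole clip before anything is usable.

```python
def autoregressive(next_frame, first, n):
    """Generate n frames one at a time, each conditioned on the
    previous frame -- cheap per step and editable on the fly."""
    frames = [first]
    for _ in range(n - 1):
        frames.append(next_frame(frames[-1]))
    return frames

def full_sequence(refine, init, passes):
    """Diffusion-style generation: start from a rough whole clip and
    jointly refine every frame over several passes; no frame is final
    until all passes complete."""
    clip = list(init)
    for _ in range(passes):
        clip = refine(clip)
    return clip
```

A hybrid like the one the article describes trains the fast frame-by-frame generator to imitate the slower whole-clip refiner, aiming for the speed of the first loop with the quality of the second.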

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Adobe Research have now developed a hybrid approach, called “CausVid,” to create videos in seconds. Much like a quick-witted student learning from a well-versed teacher, a full-sequence diffusion model trains an autoregressive system to swiftly predict the next frame while ensuring high quality and consistency. CausVid’s student model can then generate clips from a simple text prompt,

 » Read More