Mars orbiter catches pic of Curiosity on its way down! | Bad Astronomy | Discover Magazine

The simple and sheer amazingness of this picture cannot be overstated. Here we have a picture taken by a camera on board a space probe that’s been orbiting Mars for six years, reset and re-aimed by programmers hundreds of millions of kilometers away using math and science pioneered centuries ago, so that it could catch the fleeting view of another machine we humans flung across space, traveling hundreds of millions of kilometers to another world at mind-bending speeds, only to gently – and perfectly – touch down on the surface mere minutes later.

The news these days is filled with polarization, with hate, with fear, with ignorance. But while these feelings are a part of us, and always will be, they neither dominate nor define us. Not if we don’t let them. When we reach, when we explore, when we’re curious – that’s when we’re at our best. We can learn about the world around us, the Universe around us. It doesn’t divide us, or separate us, or create artificial and wholly made-up barriers between us. As we saw on Twitter, in New York’s Times Square where hundreds of people watched the landing live, and all over the world: science and exploration bind us together. Science makes the world a better place, and it makes us better people.

via Mars orbiter catches pic of Curiosity on its way down! | Bad Astronomy | Discover Magazine.

Purely awesome.

Synthetic Biology Incubator Launches

Yes, an incubator! Just for synthetic biology! It’s being hosted at Singularity University in Silicon Valley. Does anyone need any more convincing that synthetic biology has a huge future, with big leaps coming in both innovation and technology? The way we work with biology is changing, evolving from observation to invention.

It looks like the incubator is providing resources, mentoring, and stipends for its chosen startups. Hopefully it will do for synthetic biology what tech incubators such as Y Combinator have done for software startups – no doubt many will fail or have to change their business plans and project ideas, but at the very least, there will be increased coverage and education about what synthetic biology is capable of.

“Teeming with ambitious ideas and some pretty futuristic potential, synthetic biology is an emerging multidisciplinary field in which the principles of genetic engineering are coupled with genome design software to capitalize on the plummeting cost of DNA analysis and synthesis. The approach is to construct artificial biological systems in a similar way that computer chips are made. The result is a broad array of potential technologies that could lead to a radical transformation across a variety of sectors, including medicine.”

There’s a great video at the end of the article that gives an intro to synthetic biology, too.

Source: http://singularityhub.com/2012/05/29/inaugural-synthetic-biology-incubator-synbio-launches-at-singularity-university/

Dan Barber’s Culinary Crusade – WSJ.com

“Butchering and eating animals may not be called kindness, but eating soy burgers that rely on pesticides and fertilizers precipitates destruction too. You don’t have to eat meat, but you should have the good judgment to relinquish the high horse. There is no such thing as guilt-free eating.”

WSJ Soapbox piece on food, sustainability, local diets, and the environment by Dan Barber, chef at Blue Hill at the Stone Barns Center for Food and Agriculture. He discusses his opinions on what is best for humans to eat – best for us in a way that is healthy, and best for the earth in a way that is sustainable and logical – based on nutrient cycles, tracing the energy flow, and the inputs and outputs unique to different areas and their soils, as well as culture and our biological needs. Interesting ideas on “ecological intelligence” and an argument against vegetarianism (it’s always good to hear the reasoning behind both sides!).

“What I don’t like about sustainable foodies—and I’m considered one of them—is that we carry an air of preachiness about food. (No one wants to be told what to eat, whether it’s by your mother or by a group of holier-than-thou chefs.) But true sustainability is about more than just deciding to cook with local ingredients or not allowing your child to have corn syrup. It’s about cuisine that’s evolved out of what the land is telling you it wants to grow. As one farmer said to me, Food systems don’t last; cuisine does.”

Source: Dan Barber’s Culinary Crusade – WSJ.com

Colors and their names

Aatish Bhatia, a PhD student at Rutgers blogging at Empirical Zeal, gives us a fascinating read on how naming colors has affected our perception of colors and our visual worlds. A little bit of linguistics, a little bit of color theory, a little bit of visualization, a lot of interesting science. Think about it: we gave colors discrete boundaries and names, but they are really continuous, fluid things.

Here’s some of the introduction that gets you started on thinking about how we partition color:

“Blue and green are similar in hue. They sit next to each other in a rainbow, which means that, to our eyes, light can blend smoothly from blue to green or vice-versa, without going past any other color in between. Before the modern period, Japanese had just one word, Ao, for both blue and green. The wall that divides these colors hadn’t been erected as yet. As the language evolved, in the Heian period around the year 1000, something interesting happened. A new word popped into being – midori – and it described a sort of greenish end of blue. Midori was a shade of ao, it wasn’t really a new color in its own right.

“One of the first fences in this color continuum came from an unlikely place – crayons. In 1917, the first crayons were imported into Japan, and they brought with them a way of dividing a seamless visual spread into neat, discrete chunks. There were different crayons for green (midori) and blue (ao), and children started to adopt these names. But the real change came during the Allied occupation of Japan after World War II, when new educational material started to circulate. In 1951, teaching guidelines for first grade teachers distinguished blue from green, and the word midori was shoehorned to fit this new purpose.

“In modern Japanese, midori is the word for green, as distinct from blue. This divorce of blue and green was not without its scars. There are clues that remain in the language, that bear witness to this awkward separation. For example, in many languages the word for vegetable is synonymous with green (sabzi in Urdu literally means green-ness, and in English we say ‘eat your greens’). But in Japanese, vegetables are ao-mono, literally blue things. Green apples? They’re blue too. As are the first leaves of spring, if you go by their Japanese name. In English, the term green is sometimes used to describe a novice, someone inexperienced. In Japanese, they’re ao-kusai, literally they ‘smell of blue’. It’s as if the borders that separate colors follow a slightly different route in Japan.”

Source: The crayola-fication of the world: How we gave colors names, and it messed with our brains (part I)

Read part II here.

Genome Compiler is released!

Genome Compiler’s first public release just came out, as announced by founder and CEO Omri Drory. It’s a software tool for designing and debugging synthetic DNA – and ordering it, too! I haven’t played around with it yet, but I’m excited to (update later after I try it out). Everyone is starting to make big, visible steps towards making synthetic biology more accessible, modular, and highly functional. Hopefully this will lead to many more awesome developments.

The Vertical Forest

“More and more people believe that access to a garden, and to gardening, is a basic human need. But is the answer a traditional house and garden or should we be looking at gardens in the sky?”

Dubbed “Flower Towers” by the Financial Times, these fluffy green buildings designed by architect Stefano Boeri are currently rising in Milan. Photos show that the skeletons of the buildings are up, an intricate maze of balconies and jutting gardens designed to insulate the building, counter air pollution and support reforestation, work towards sustainability, and maintain biodiversity and a functioning ecosystem, all suspended 110 meters into the air. These microclimates ideally would maintain their own energy and water usage and recycling, including using repurposed grey water from the building to feed the plants.

The floor layout with plumbing detail makes the building look like an Escher piece. Some 900 trees of varying heights and structural types will be used to make a diverse wall, both for the biology of the building and for coating its sides more fully. The finished product, if flattened out, would be equivalent to 15,000 square meters of land and 10,000 square meters of forest, and is intended to counteract Milan’s rapid urban expansion.

Eco-cities coming of age?

NYU ITP Spring Show 2012

My roommate, a graduate student at NYU’s Interactive Telecommunications Program (and computer science / engineering / robots / circuitry / electronics / art&design extraordinaire, and overall awesome person), invited me to the ITP Spring Show this year, where ITP students display and demo their projects from classes, independent research, and theses. It’s a two-day showcase, spread out among the classrooms and lab spaces of the ITP floor within NYU’s Tisch School of the Arts. The students are all intensely innovative and creative, but also skilled in the technical background needed to carry out their ideas, and most importantly, very interested in how to connect to people and how to use technology creatively (ITP has proclaimed itself the “Center for the Recently Possible”). The show happens twice a year, at the end of the fall and spring terms.

There were many, many favorites, but here’s a small sampling:

Descriptive Camera, by Matt Richardson

“The Descriptive Camera works a lot like a regular camera—point it at a subject and press the shutter button to capture the scene. However, instead of producing an image, this prototype outputs a text description of the scene. Modern digital cameras capture gobs of parsable metadata about photos such as the camera’s settings, the location of the photo, the date, and time, but they don’t output any information about the content of the photo. The Descriptive Camera only outputs the metadata about the content.

“As we amass an incredible amount of photos, it becomes increasingly difficult to manage our collections. Imagine if descriptive metadata about each photo could be appended to the image on the fly—information about who is in each photo, what they’re doing, and their environment could become incredibly useful in being able to search, filter, and cross-reference our photo collections. Of course, we don’t yet have the technology that makes this a practical proposition, but the Descriptive Camera explores these possibilities.”

Cool new way to approach photography, especially as all of your photographs become digitized and trapped on your spare hard drives. Even photos shot on analog film end up digitized, as if to save them in some way that seems more permanent to us (even if it isn’t). The camera works by sending the image to workers who have signed up for Amazon’s Mechanical Turk system, which farms out Human Intelligence Tasks (HITs) to people over the internet for a fee. The workers who take these tasks send back a short descriptive text about the image. The Descriptive Camera makes for a fascinating study in what we see in images that come with no context, and how people choose to tell stories or color what they see with their personal viewpoints.
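
The flow described above is simple enough to sketch. Here's a minimal, hypothetical Python version of the capture-to-description-to-print cycle; every helper is a stub standing in for the real webcam, Mechanical Turk calls, and thermal printer, and none of it is the project's actual code:

```python
# Hypothetical sketch of the Descriptive Camera's capture -> describe -> print loop.
# None of these helpers come from the actual project; they are stubs marking where
# a real webcam grab, Mechanical Turk HIT, and receipt printer would plug in.

def capture_image() -> bytes:
    """Grab a frame from the camera (stubbed)."""
    return b""

def post_hit(image: bytes) -> str:
    """Upload the image and create a Mechanical Turk HIT asking a worker for a
    short description of the scene; return the HIT id (stubbed)."""
    return "HIT-0001"

def fetch_description(hit_id: str) -> str:
    """Poll the HIT until a worker submits their description (stubbed)."""
    return "A man in a plaid shirt stands next to a woman impersonating an orangutan."

def print_description(text: str) -> None:
    """Send the text to the camera's receipt-style printer (stubbed as stdout)."""
    print(text)

def on_shutter_press() -> None:
    image = capture_image()
    hit_id = post_hit(image)          # a human worker, not software, does the "seeing"
    print_description(fetch_description(hit_id))

if __name__ == "__main__":
    on_shutter_press()
```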

Here are some descriptions collected from the show, as posted to Matt’s blog:

A thoughtful gentleman in a pink and baby blue plaid shirt stands next to a lovely woman who appears to be impersonating an orangutan.

A woman in a black top looks terrified by the gentleman in a grey shirt who seems to be telling a story about an enormous fish he once caught.

A woman in a black tank top stands in the foreground. Behind her, it appears as though there is a beard convention taking place.

Two men are fighting over the honor of this lady with bangs who is standing in the background. She actually looks really excited about this fight. DRAAAAAAMA!

Added bonus: NPR’s Wait Wait…Don’t Tell Me show featured the Descriptive Camera in a recent limerick.

Chairish, Rocking Chair for Two, by Annelie Berner

The Design for Digital Fabrication class produced some great pieces on display made with laser cutters and 3D printers. This shareable chair was whimsical, functional, and beautiful — sit down with a friend and sway in unison on the carefully cut wood frame, which was made on a CNC router.

Rehuddle, by Philip Groman and Robbie Tilton

Rehuddle is a very simple conference-call site that makes it ridiculously easy to set up group calls. You can invite friends through calls, texts, or emails, and you even get those iconic little Turntable.fm avatars.

“Our target audience is small creative businesses that cannot afford expensive phone systems and are looking for a free, easy and fun solution. Our secondary target market is for anyone looking to speak and share information with two or more people.” – Project Description

BurritoBot, by Marko Manriquez

Laser cutting and 3D printing come together to yield…the elusive perfect burrito. The elusive perfect 3D-printed burrito.

“Burritob0t is a platform for rapid prototyping and tracing the source of food in our lives to reveal hidden issues revolving around fast food: labor practices; environmental consequences; nutritional values. Mexican fast food is emblematic of the assembly line, mass produced era of modern consumables – appropriating the authenticity of the ethnic food sensibility it purports to embody while masquerading as an edible like substance.  Because the burrito is a mass market consumable, it lends easily as a way for examining and stimulating discussion on various aspects of the food industry including: how and where our food is grown, methods of production, environmental impact, cultural appropriation and perhaps most importantly – what our food means to us. By parodying the humble burrito’s ingredients and methods of production we can shed light on these exogenous factors and interconnected systems surrounding the simple burrito.”

Galapagos, by Ann Chen and Danne Woo

This is a typeface designed with the help of genetic algorithms. As someone who has worked in evolutionary biology and spent a lot of time looking at these sorts of patterns and trees, I found it particularly cool to see how people found them both beautiful and inspirational for design, especially as the principles of evolution and feedback were incorporated into each iteration of the font:

“User Scenario: We will have the program set up on an iPad. User approaches iPad, directions on how to begin generating typeface will be clearly presented. When user generates first evolution of the typeface, they have the option of either printing and saving what they’ve created or creating another generation. The characteristics of the next font generation (color, shapes, size, etc.) can be determined by the user depending on how long they hover over each example. The longer they hover over one, the higher ranked that letter’s characteristics will be and the more likely the next generation will look like that character. User saves the print and can email print to themselves.”
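
That user scenario is essentially fitness-proportional selection with hover time as the fitness score. Here's a rough sketch of that selection step in Python, where each letterform is reduced to a dictionary of made-up trait values; the real project's genome and mutation scheme aren't documented here, so treat this purely as an illustration of the principle:

```python
# Sketch of hover-time-weighted selection for the next generation of letterforms.
# The "genome" (hue, weight, slant) and the mutation step are assumptions for
# illustration, not the project's actual representation.
import random

def next_generation(letterforms, hover_seconds, population=8, mutation=0.1):
    """letterforms: list of trait dicts; hover_seconds: parallel list of how long
    the user hovered over each one, which acts as its fitness score."""
    total = sum(hover_seconds) or 1.0
    weights = [h / total for h in hover_seconds]
    children = []
    for _ in range(population):
        # Longer hover time -> higher chance of being picked as a parent.
        parent = random.choices(letterforms, weights=weights, k=1)[0]
        # Copy the parent and nudge each trait slightly (mutation).
        child = {trait: value + random.gauss(0, mutation)
                 for trait, value in parent.items()}
        children.append(child)
    return children

parents = [{"hue": 0.2, "weight": 0.5, "slant": 0.0},
           {"hue": 0.7, "weight": 0.3, "slant": 0.2}]
hovers = [4.5, 1.2]  # the user lingered much longer over the first letterform
print(next_generation(parents, hovers, population=4))
```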

Dinosaur Treasures, by Anh Ly and Ji Hyun Lee

Who doesn’t love digging around in a sandbox? Let’s be real. This highly interactive piece encouraged the curious to pretend to be an archaeologist and hunt for dinosaur fossils…yep.

One part that seemed to capture everyone’s interest was how the sensors could tell how deep you were digging — the idea of depth adding a new element to your interactive, 3D space archaeology adventure. The sensors could tell where you were digging and how far down you were going, and dinosaurs (and lobsters) would pop up accordingly. I thought it would be pretty awesome to have an underwater deep-sea explorer version of this, like putting on a suit and floating around in a pool that gets magnified into a giant ocean? And whales and squids come at you? Is that too crazy?
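
I don't know how the project actually implements the depth logic, but a plausible guess is a lookup from dig location and depth to whichever buried objects have been "reached." Here's a toy Python sketch of that idea, with entirely made-up grid cells, depths, and fossils:

```python
# Hypothetical depth-to-fossil lookup for a sensed sandbox; the cells, thresholds,
# and objects are invented to illustrate "dig deeper, reveal more."
BURIED = {
    (2, 3): [(0.05, "lobster"), (0.12, "dinosaur skull")],  # (burial depth in m, object)
    (5, 1): [(0.08, "dinosaur rib")],
}

def reveal(cell, depth):
    """Return every object at this grid cell whose burial depth has been reached."""
    return [name for buried_at, name in BURIED.get(cell, []) if depth >= buried_at]

print(reveal((2, 3), 0.06))   # ['lobster']
print(reveal((2, 3), 0.15))   # ['lobster', 'dinosaur skull']
```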

Call Your Sequencer, by Byung Han Lim and Dong Ik Shin

This seemed complicated from far away, but you’re quickly drawn in by the audiovisual mix and the cluster of very concentrated people staring at the screen, bobbing their heads, and poking their cell phones. A group of users calls up a phone number, each on their own cell phone. Once you’re in the system, your number shows up on the grid, with a whole row of cubes to yourself as your own personal 8-step music sequencer — within the 8-member-maximum band. You control your beat and rhythm, which cycles automatically, by pressing numbers on your phone’s keypad, which turns the steps on and off and animates (or stills) the corresponding cube. Once you get more comfortable, you can mix in pitch and instrument changes with the pound and star keys, the flashing colors syncing with your inner dial-pad music genius.
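
Stripped of the telephony and audio, the shared state is just a grid: one row of eight steps per caller, with keypad digits toggling steps. A toy Python model of that grid, ignoring DTMF decoding, the pitch/instrument keys, and playback, looks something like this:

```python
# Toy model of the shared 8-step sequencer state: one row per caller, keys 1-8
# toggle steps on that caller's row. Phone audio and sound output are omitted.
class Sequencer:
    def __init__(self, max_callers=8, steps=8):
        self.steps = steps
        self.max_callers = max_callers
        self.rows = {}  # caller phone number -> list of on/off steps

    def join(self, number):
        """Add a caller if the band isn't full; return whether they're in."""
        if number not in self.rows and len(self.rows) < self.max_callers:
            self.rows[number] = [False] * self.steps
        return number in self.rows

    def keypress(self, number, key):
        """Digits 1-8 toggle the corresponding step on the caller's row."""
        if number in self.rows and key.isdigit() and 1 <= int(key) <= self.steps:
            i = int(key) - 1
            self.rows[number][i] = not self.rows[number][i]

band = Sequencer()
band.join("555-0123")
for key in "147":
    band.keypress("555-0123", key)
print(band.rows["555-0123"])   # steps 1, 4, and 7 are now on
```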