Top Tech #128: Smart Golf Carts, Nano Bulbs, Sensing Style
Important innovations in science and technology
By Paul Worthington

Wednesday’s Top Tech:
• Self-driving… golf carts
• Nano Light bulb
• Software perceives like you do
Self-driving… golf carts

Okay, so they’re not the autonomous automobiles we’ve been waiting for…
Self-driving golf carts ferried 500 tourists around winding paths trafficked by pedestrians, bicyclists, and the occasional monitor lizard, MIT reports.
The Singapore-MIT Alliance for Research and Technology (SMART) experimented at a large public garden in Singapore with the carts, in which 98 percent of participants “said that they would use the autonomous golf carts again.”
The SMART vehicles take a minimalist approach, with “a simple suite of strategically placed sensors” augmented “with reliable algorithms” for “robust results that require less computation and have less of a chance to get confused by situations where one sensor says one thing and another sensor says something different.”
The golf carts’ sensors consist entirely of off-the-shelf laser rangefinders mounted at different heights.
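The MIT report doesn't detail the algorithms, but the idea of requiring sensors to agree before acting can be sketched simply. The function below is a hypothetical illustration, not the SMART project's code: it flags an obstacle only when a majority of rangefinders report a nearby return and those returns cluster together, so a single confused sensor can't trigger a false stop.

```python
# Hypothetical sketch: reconciling readings from laser rangefinders
# mounted at different heights. Names and thresholds are illustrative,
# not taken from the SMART project.

def obstacle_detected(readings, max_range=10.0, agree_margin=0.5):
    """Flag an obstacle only when enough sensors agree.

    readings: distances in meters from each rangefinder; None = no return.
    Reports an obstacle if a majority of sensors see something within
    max_range and those readings agree to within agree_margin meters.
    """
    valid = [r for r in readings if r is not None and r < max_range]
    if len(valid) * 2 <= len(readings):  # fewer than a majority see anything
        return False
    # Require the agreeing sensors to cluster: spread within the margin.
    return max(valid) - min(valid) <= agree_margin

print(obstacle_detected([3.1, 3.3, 3.2]))   # all three agree -> True
print(obstacle_detected([3.1, None, None])) # only one sensor fires -> False
print(obstacle_detected([3.1, 9.5, None]))  # sensors disagree -> False
```

The payoff of this kind of cross-checking is exactly what the researchers describe: less computation than heavy fusion pipelines, and no single sensor's odd reading can confuse the vehicle.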
Nano Light bulb

A nanoscale light-emitting device on a chip “could help usher in a new generation of ultra fast, small, lightweight photonic computers,” Popular Science reports, “which use light instead of an electric current.”
The light source is made with graphene, natch, and almost literally shrinks the traditional Edison light bulb down to atomic size.
Made by a Columbia University researcher, it is “ideal for photonic computing.”
Software perceives like you do

We can perceive stylistic similarity between objects that transcends structure and function: For example, we can see a common style such as “Danish modern” in both a table and chair, though they have different structures, note computer scientists from the University of Massachusetts Amherst.
Computers aren't so good at that yet. Now a new modeling program uses geometric matching and machine learning to mimic human perception of style and compare the style similarity of three-dimensional objects.
Phys Org reports on it here, and here’s the researchers’ paper.
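The paper's actual method combines geometric element matching with a metric learned from human judgments; as a much simpler illustration of the general idea of comparing objects by style descriptors, one could score similarity between numeric feature vectors. Everything below (the feature names, the values) is invented for the sketch.

```python
import math

# Illustrative only: compare objects via hypothetical numeric style
# descriptors using cosine similarity. The UMass Amherst system instead
# matches geometric elements and learns its metric from crowdsourced
# human style judgments.

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Invented descriptors: (edge curvature, leg taper, ornamentation)
danish_modern_chair = [0.20, 0.90, 0.10]
danish_modern_table = [0.25, 0.85, 0.15]
baroque_table       = [0.80, 0.20, 0.95]

# A chair and table in the same style score closer than a cross-style pair,
# despite the chair and table having different structures and functions.
print(cosine_similarity(danish_modern_chair, danish_modern_table))
print(cosine_similarity(danish_modern_chair, baroque_table))
```

The point of the sketch is the one the researchers make: style similarity can cut across object categories, so a chair and a table can land near each other in feature space while two tables land far apart.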


