
Saturday, June 11, 2011

The Self-Driving Car

The Future of the Self-Driving Car

All over the world, engineers and computer scientists are hard at work trying to produce some truly remarkable cars. Among these is the driverless car…and exactly as the name indicates, no one needs to drive it. Using a large array of different sensors to judge when to turn, stop, accelerate, indicate and brake, the driverless car navigates itself through any street it comes across, and recent tests suggest that scientists may be close to success. That’s the exciting news.

So what’s the bad news? Driverless cars are supposedly banned in many countries, so testing and selling of this concept isn’t going to be quite as fast as we’d hope. And there’s another thing too: how accurate can a machine be at making decisions? There are many circumstances where driverless cars could save lives, but there are also occasions where we must make quite difficult, spontaneous decisions, which a machine may be unable to make in the required time, especially during the early stages of deployment. A lot of testing will be required to perfect this system. Nonetheless, it’s coming…and it’s coming very soon. And the cars aren’t all that slow either.

Why have a self-driving car?

 

You’ve probably had plenty of moments when you’ve gone out with friends and needed to take a taxi, a bus or a train to get home, or needed to be picked up by family. Or maybe you’ve been to the pub and had something to drink, so you can’t drive home. You’d probably much rather have the comfort of your own car than be stuck in a jam-packed train carriage, or wait in the freezing cold not knowing when the next bus will come.
Well, what if you could just take out your mobile, open an app that connects to your car’s built-in computer system, and instruct your car to drive over and pick you up? It pulls up beside you, you get in, and you sit back and relax as it takes you safely home.
No need to touch the steering wheel. In fact…do you need to do anything? It’s like your own personal taxi service. Would you even need a driver’s licence?

Mobility on Demand
As the first video showed, this could be very useful as a taxi service. You may not even need to buy a car. There could simply be a pool of cars in a garage somewhere in the centre of town: you call one up, choose your destination, and it comes and whisks you off. There is a name for this.
MIT (Massachusetts Institute of Technology) have called it Mobility on Demand. Why have MIT given it a name? Because MIT are in fact working on their own prototype of a driverless car, which they call CityCar. Amazingly, they predict they can get the first cars into commercialisation by 2012. On top of that, they expect to sell 100,000 by 2014. No doubt they will come at a very high price and will be quite slow in their first versions.
However, to understand why this concept is coming so soon, it’s important to know what CityCar is and what its aims are.


These plans aim to greatly reduce traffic congestion, because the computer in each car can take traffic into account to continuously calculate the best possible route. This could then be taken even further: the cars could communicate with each other, and with a big server system located somewhere away from the city, to find out in real time when accidents occur or where traffic is building up, dramatically reducing congestion.
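
To make this concrete, here is a minimal sketch of the kind of routing involved: Dijkstra’s algorithm over a road graph whose edge costs are travel times, recomputed whenever a live traffic report changes the costs. The road names and timings below are invented for illustration.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> {neighbour: travel_time_minutes}."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, minutes in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
    return None

# A hypothetical road graph; travel times in minutes.
roads = {
    "Home":   {"HighSt": 5, "RingRd": 7},
    "HighSt": {"Centre": 4},
    "RingRd": {"Centre": 6},
    "Centre": {},
}

print(shortest_route(roads, "Home", "Centre"))  # (9, ['Home', 'HighSt', 'Centre'])

# A live traffic report arrives: an accident triples the High Street leg.
roads["HighSt"]["Centre"] = 12
print(shortest_route(roads, "Home", "Centre"))  # (13, ['Home', 'RingRd', 'Centre'])
```

A real system would of course work on maps with millions of road segments and constant streams of updates, but the principle is the same: recompute the cheapest path whenever the costs change.
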
Parking could also be greatly improved if the cars can reduce their length to fit into tight spaces. And because the wheels are controlled robotically, the car can turn in ways a normal car can’t: it can spin 360 degrees on the spot rather than having to perform a U-turn.

How is it done?
This technology is all possible now, and we know how it’s done. It just has to be tested and improved until it is extremely accurate, and hopefully 100% accurate (although some may say that’s never possible).


Data is picked up from radar, cameras, laser scanners and GPS; motion sensors, object recognition and heat sensors are integrated into the more accurate driverless cars. The data is then put together and mapped to form an overall picture of the area and any surrounding obstacles. An image such as the one below can be produced to represent this information.
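
As a toy illustration of that fusing step, here is a sketch in which each sensor contributes detections to a shared grid, and a cell only counts as an obstacle once enough evidence accumulates. All the readings and the threshold are invented.

```python
# Minimal occupancy-grid sketch: each sensor reports detections as
# (x, y, confidence); cells accumulate evidence, and anything over a
# threshold is treated as an obstacle.
GRID = 10  # a 10 x 10 patch of cells around the car

def fuse(readings, threshold=1.0):
    grid = [[0.0] * GRID for _ in range(GRID)]
    for x, y, confidence in readings:
        grid[y][x] += confidence
    return [(x, y) for y in range(GRID) for x in range(GRID)
            if grid[y][x] >= threshold]

radar  = [(3, 4, 0.6), (7, 2, 0.9)]   # good at range and speed
camera = [(3, 4, 0.5), (5, 5, 0.4)]   # good at recognising objects
lidar  = [(3, 4, 0.7), (7, 2, 0.3)]   # precise geometry

print(fuse(radar + camera + lidar))   # [(7, 2), (3, 4)]: cells where sensors agree
```

Agreement between independent sensors is the whole point: a single camera glare artefact or radar ghost shouldn’t be enough to slam on the brakes.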


Google’s attempt at a driverless vehicle in fact makes use of Google Street View. The Google car gets a 360-degree view of the area from a camera on top of the car that spins round incredibly fast:


Since April this year, Google has been lobbying the Nevada state government to legalise driverless cars and make Nevada the first US state to do so. This would be great for Google, as they could finally test legally on the streets on a much grander scale and start selling their driverless cars. Around 8 months ago, their cars were reported as having gone 140,000 miles with only one accident – a bump with the car in front when stopping at a traffic light. The man trying to push it through, Sebastian Thrun, suggests how much more we could do with our car journeys: watching videos, playing games, even exercising, or maybe just working. The presentations to the Nevada government will continue till mid-June, when the verdict will be given.

The ethical, personal and economic issues:
The Google Car aims to keep drivers in control: the driver still mainly controls the car, and the self-driving function can simply be used when the driver needs to answer the phone, for example, or perhaps on a motorway or in traffic. However, some companies want the car to drive entirely by itself, without any need for driver interaction at all. The issue here is machine breakdown: after all, computers crash and machines fail all the time in our lives, and on the roads that can be deadly. Is it really worth placing our lives in the hands of machines when on the roads? Maybe. Accuracy is key, and many designers will probably ensure that if something goes wrong, a backup system in the car brings the driver to safety.
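
What might such a backup look like? One common safety pattern is a watchdog: an independent supervisor that expects a regular “heartbeat” from the main driving computer and triggers a safe stop if it falls silent. The sketch below is a deliberately simplified illustration of the idea, not any manufacturer’s actual system.

```python
import time

HEARTBEAT_TIMEOUT = 0.5  # seconds; a hypothetical safety budget

class Watchdog:
    """Supervisor that expects regular heartbeats from the main driving computer."""
    def __init__(self):
        self.last_beat = time.monotonic()

    def heartbeat(self):
        self.last_beat = time.monotonic()

    def check(self):
        if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT:
            self.safe_stop()

    def safe_stop(self):
        # In a real car this would engage a redundant controller:
        # hazard lights on, gentle braking, steer to the hard shoulder.
        print("Main computer unresponsive: pulling over safely.")

dog = Watchdog()
dog.heartbeat()   # main computer reports in
dog.check()       # within budget: nothing happens
time.sleep(0.6)
dog.check()       # heartbeat missed: failsafe engages
```
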
Personal preference brings up a lot of debate too. Who wants the same car as everyone else when out and about? Many people don’t care, especially if they don’t drive much, but there are plenty of car fanatics out there. Would allowing driverless cars on the road mean that non-driverless cars get banned? Or can they work together – can we have both types on the roads? If not, what would happen to people like Jeremy Clarkson? What do people who actually like driving say about all this?

Economic issues come into play too. Why buy a car if there are plenty of driverless cars on demand in the streets, which you can just hop into, pay for with something like an Oyster card, and drive off in for a while? You might buy a driverless car to benefit from the different colours, specifications and models, but is it really going to be worth it? Won’t it cost far more than necessary? Car manufacturers could suffer greatly even if they do produce their own driverless cars, and many countries benefit from car production...so that’s a lot of economic problems, and a massive shift required across industries.


When will we see these cars?
Very soon. Very soon indeed. Google and MIT reckon a couple of years. The international company Bosch reckons within the next ten years. Who’s right? Perhaps both. We may see driverless cars used in city environments to ferry people around at first, a little like trams...or maybe more like the Boris Bikes in London. A few years may pass before we witness any private ownership of driverless cars, so it will be a while before you have one in your garage. And it will probably cost a fair amount too.


More information:
This is a very interesting video to view. Look up Sebastian Thrun and the Google project, and look up the MIT CityCar – the results are pretty interesting. Keep an eye on the lobbying of the Nevada government too, which is due to end this month.

Saturday, June 4, 2011

Digital Scent

Digital Smell AKA Smell-o-vision
You’re walking through the bustling market streets of a foreign city, divine smells wafting through the air, mouth watering. Out comes your camera, and you take a few pictures here and there of the stalls and the food to send back to friends and family. The snapshots come out great and you send them straight from your camera. There’s a problem though. Your family and friends receive your photo and just smile, because to them it’s merely an OK photograph. Unlike you, they have not received the full atmosphere. They have only seen a 2D snapshot in time.
But…what if your holiday pictures were more than just images? What if they could capture scent too? What if you could send those amazing spice and curry smells along with the picture? On the other side, your family opens the picture on their mobile, their computer or their tablet, and the extraordinary smell fills the air around them. Truly immersive.


It is often claimed that roughly three-quarters of our emotions are affected by smell. If so, smell is a crucial part of our everyday lives, making us grumpy, happy, sad, panicked…
The question is, is it really possible to trade and send smells electronically? Could you really download perfume or deodorant straight off the Internet?
Yes….and here’s why:

What are Smells?

We perceive smells and odours. Anything we can smell is called an odorant, and it is made up of a chemical compound. In other words, whatever is giving off the scent is releasing very light, volatile (i.e. easily evaporating) molecules that are carried through the air to your nose. Different chemical compounds have different smells: octyl acetate, for example, creates an odour that smells like orange, while nerolidol creates a scent much like fresh bark.

How do we detect smells?
The term for detecting smells is olfaction. Naturally, just like touch or taste or any other sense for that matter, special sensory cells are required to detect these odours. These cells are found in their greatest quantity at the back of the nose, in a large air space called the nasal cavity.
These are the cells responsible for sending messages to the brain when molecules of an odour are detected. Unfortunately, there is as yet no completely proven theory of how these cells and the brain perceive the smell of particular molecules, though many theories exist. One suggestion is that the brain holds a chemotopic map, identifying and coding chemical compounds once the molecules have been broken down into their individual components.
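
To picture the “coding” idea, you can think of a smell as a pattern of activation across many receptor types, and of recognition as comparing such patterns. The sketch below is purely illustrative; as noted above, real olfaction involves hundreds of receptor types and is not fully understood.

```python
# Purely illustrative: an odour represented as a pattern of activation
# strengths across a handful of invented receptor classes.
RECEPTORS = ["fruity", "woody", "citrus", "musky"]

orange     = {"fruity": 0.8, "woody": 0.1, "citrus": 0.9, "musky": 0.0}
fresh_bark = {"fruity": 0.1, "woody": 0.9, "citrus": 0.0, "musky": 0.3}

def similarity(a, b):
    """Crude overlap score between two activation patterns."""
    return sum(min(a[r], b[r]) for r in RECEPTORS)

print(similarity(orange, orange))      # ~1.8: identical patterns
print(similarity(orange, fresh_bark))  # ~0.2: very different smells
```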

What does this mean?
Enough about that. Here’s the problem. Any device wishing to enable the trading and sending of smells electronically needs, in some sense, to understand chemical compounds and perceive smells. If we don’t know exactly how the brain does it, how can we create a device that can detect and produce any smell? Without that understanding, we would need hundreds and hundreds of different cartridges full of different aromas and fragrances, just as the SMELLIT concept demonstrates.

This therefore severely limits our current ability to produce such a technology.
However, there are ways we can push this technology forwards even without fully knowing how the brain understands smells. For example, Takamichi Nakamoto has designed an odour recorder, which detects a smell – such as that of an apple – and then mixes chemicals and releases the resulting compound to mimic the smell as closely as possible. This is one step closer to solid ground in digital scent.
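
Here is a toy version of that record-and-reproduce idea: given a target odour sensed as a vector of component strengths, search for the blend of a few base chemicals that best approximates it. The components, profiles and amounts below are all invented; a real odour recorder works with far more channels and real chemistry.

```python
from itertools import product

BASES = {                       # component strengths per unit of each base
    "octyl_acetate": (0.9, 0.1, 0.0),   # orange-like
    "nerolidol":     (0.1, 0.8, 0.2),   # bark-like
    "vanillin":      (0.2, 0.1, 0.9),   # sweet
}

def blend(amounts):
    """Overall component strengths produced by a given mix of the bases."""
    return tuple(
        sum(amounts[name] * profile[i] for name, profile in BASES.items())
        for i in range(3)
    )

def error(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

target = (0.5, 0.45, 0.1)       # what the sensor array "recorded"

# Coarse exhaustive search over mix ratios from 0.0 to 1.0 in steps of 0.1.
steps = [round(0.1 * i, 1) for i in range(11)]
best = min(
    (dict(zip(BASES, combo)) for combo in product(steps, repeat=len(BASES))),
    key=lambda amounts: error(blend(amounts), target),
)
print(best)         # {'octyl_acetate': 0.5, 'nerolidol': 0.5, 'vanillin': 0.0}
print(blend(best))  # (0.5, 0.45, 0.1): reproduces the recorded odour
```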


Digital Scent – The Successes and the (mostly) Failures

We have already used scents to enhance film, images and music for many decades – some believe it was even done over a century ago. In 1929, during showings of a production called “The Broadway Melody” in New York theatres, a perfume was sprayed from the ceiling. Similar tricks were used throughout the 1930s and 40s, though it was discovered that the smells took far longer than expected to clear and could end up sticking to the furniture too. A further problem was identified: how do you spread the smell to everyone watching a film or theatre production in enough quantity for people to notice? It was possible, but it also meant the smell lingered in the air for so long that when the next smell was released, viewers were confused by the multiple overlapping smells.
Other attempts at bringing smell to the cinema failed because hardly anyone was interested and costs were simply too high to equip all the seats with devices to release the smells. Once again, many people could not smell the scents around the cinema, and an annoying hissing noise when smells were released detracted from the viewing too. People with colds were not benefiting either. Such films include ‘Scent of Mystery’ and the more successful AromaRama travelogue of China, ‘Behind the Great Wall’.
Smell-o-vision in the cinema was named one of the worst ideas of all time in a 2000 survey. Shame.

But then the real research came along.


Before any successes, a lot more attempts were made to create a method of combining smell with other media. DigiScents tried out an idea in 2000 called iSmell, a device that connected to the computer via USB (they even created a second version for the Mac). It contained a cartridge with 128 odours and could emit a smell when an e-mail was opened or a website visited, for example. Thousands of common odours were encoded as small digitised files.
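
What might one of those “small digitised files” have looked like? Quite possibly nothing more than indices into the receiver’s standard cartridge plus intensities; no chemistry travels over the wire. The format below is entirely hypothetical, just to show how compact such a scent tag could be.

```python
import json

# Hypothetical "scent tag" of the kind an iSmell-style device might attach
# to an e-mail or web page.
scent_tag = {
    "version": 1,
    "duration_ms": 3000,
    "channels": [
        {"cartridge_slot": 17, "intensity": 0.8},   # e.g. citrus
        {"cartridge_slot": 42, "intensity": 0.3},   # e.g. cedar
    ],
}

payload = json.dumps(scent_tag)
print(len(payload), "bytes")   # a couple of hundred bytes at most

def play(tag):
    for channel in json.loads(tag)["channels"]:
        print(f"release slot {channel['cartridge_slot']} at {channel['intensity']:.0%}")

play(payload)
```
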
Sadly, it was named one of the worst tech products of all time, and the company went bankrupt.

Still not looking great for smell, is it?
TriSenx developed a concept called the ‘Scent Dome’, which aimed to release up to 60 distinct smells, for example linked to an e-mail. Once again, a failure.

Other experiments, mainly in Japan, investigated whether scent creation would enhance user experiences on the computer. Again, they largely failed: the technologies just didn’t create a pleasantly immersive experience.
Here is one idea that may prove there still could be a future for smell-o-vision:

Scentcom, among other companies, is still hard at work trying to revolutionise this technology, and one day they may just do it. Let’s see, then, what we can expect from the future of this technology:
The Future of Digital Scent:
Ok, so things may be starting to turn around. Japanese researchers reckon they can get digital smell to every TV set by 2020. I believe now, without a shadow of a doubt, that it will be possible. Here is more evidence that progress is being made:


TV
When watching films or normal TV, particular sounds or clips could be combined with smell, so that the appropriate scents are released from a small speaker-like device on or next to the TV. This could really enhance viewing: you could smell the frying and baking of food on a cooking show, or the smoke of an evil villain’s cigar.

Gaming and PC

ScentScape is a very recent product which could really bring digital scent into homes. It promises a more immersive experience for gaming and films. Once again, it connects via USB and contains a cartridge, this one holding 20 basic scents; each cartridge is claimed to last at least 200 hours of heavy use, after which new cartridges can be purchased – just like a printer. ScentEditor enables users to create home videos with scents added, which are then played back through ScentScape. Furthermore, it comes with an SDK, which means professional programmers can code in C++ to create new applications for ScentScape. This is the first true move into the future of digital scent. A short, unclear clip of it in its working state can be seen here:


Ocean and forest smells could be released, for example, which could be quite a spectacular experience.
It could greatly enhance virtual reality games too, providing gameplay that actually makes you feel much more like part of the game.
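
To picture what a ScentEditor-style scent track might involve, a home video’s scent data could be as simple as a list of timestamped cues. The real SDK is C++; this Python sketch is language-neutral pseudocode of the idea, not the actual ScentEditor format or API.

```python
# Hypothetical scent track for a home video: timestamped cues naming one of
# the cartridge's scents. Illustration only.
scent_track = [
    (0.0,  "pine_forest", 0.6),   # (seconds, scent name, intensity)
    (12.5, "ocean",       0.9),
    (30.0, "campfire",    0.4),
]

def cues_near(playback_time, window=0.5):
    """Cues that should fire near the current playback position."""
    return [cue for cue in scent_track if abs(cue[0] - playback_time) <= window]

print(cues_near(12.3))   # [(12.5, 'ocean', 0.9)]
```
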
Education
Students could potentially learn words and phrases faster if a smell were associated with them. For example, an orange could be shown to a student together with its smell, and they would remember the image and the smell together as an “orange”.
Scent Marketing
Advertisers could create posters that release specific smells, or ensure that their store promotions or internet adverts have an associated smell attached to them. For example, a fast-food company advertising chips and burgers could make all its adverts release real (though probably modified) smells of its food, to entice people to go and buy.

 

Cameras
Capture a holiday scene and send it to your PC. You could also send it to portable devices, provided they contained small enough cartridges of smells, or were themselves connected up to PCs and digital smell devices.

Everyday situations:
You could have scent cartridges in clothes, so that if your SMART clothing senses you are in a bad mood, relaxing aromas are emitted from small button-sized holes.
Relaxing aromas could also be emitted in vehicles to keep people happy, and perhaps alerting smells to keep drivers awake on the roads too.
It could be used in medical research to help catch signs of neurodegeneration early.
There are so many possibilities…it’s just a question of how long it takes to make them real. I think the massive growth of digital smell devices could happen within the next five years!

Read more:

Saturday, May 28, 2011

The Morph Concept - The Future of Screens

The Morph Concept

Here’s a cool video to start with, and trust me, it’s worth watching from beginning to end. It highlights some of the most exciting aspects of the morph concept:



Almost every technology company designing portable electronics hopes that one day its mobile devices can be stretched and transformed in mere seconds to serve an entirely new purpose.
For example, you take your TV screen off its stand, push on the sides, and suddenly it shrinks to the size of a mobile phone so you can take it with you to work or school. Then, when you’re on the train, you pull on the corners and expand it again to tablet size so you can use the Internet and touchpad.
This is the morph concept: the stretching and collapsing of gadgets to any size the material allows.



Why have stretchable screens?

- So many more tasks can be carried out by a single device, so there is no need to buy many separate devices.
- The device is portable.
- The device is light.
- The device can grow to be more personal and customisable.
Though these are some major benefits, perhaps the greater benefits can be seen when looking at the technology that may be required to create the screens: nanotechnology.
The following video, Nokia’s Morph Concept designs, shows just how beneficial nanotechnology and flexible screens can be.



Nanotechnology?
Nanotechnology is a truly amazing field that could enable resizable screens, among other concepts, to become a reality. The particles and structures involved in nanotechnology are extremely small, measured in nanometres (one nanometre is one thousand-millionth of a metre). This gives them some key qualities that traditional materials for electronic devices, such as silicon, do not have on their own.
What qualities are those? Silicon is rigid. Like wood, it won’t bend easily at its normal thickness. However, anything thin enough can be bent and folded, paper for example. Nano-scale materials can be folded. So, if silicon could be made much thinner, then perhaps the flexibility factor could be introduced.
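
There is a simple rule of thumb behind “thin enough bends”: the strain at the surface of a film of thickness t bent around a radius R is roughly t / (2R), and silicon fractures at a strain of around 1%. The little calculation below shows why thickness matters so much; the figures are order-of-magnitude illustrations only.

```python
# Beam-bending rule of thumb: surface strain of a film of thickness t bent
# to radius R is roughly t / (2 * R). Silicon cracks at around 1% strain,
# so the thinner the film, the tighter it can bend.
FRACTURE_STRAIN = 0.01   # ~1% for silicon

def min_bend_radius(thickness_m):
    """Smallest bend radius before the film's surface strain hits the limit."""
    return thickness_m / (2 * FRACTURE_STRAIN)

for t in (500e-6, 10e-6, 100e-9):   # full wafer, thin film, nano-membrane
    print(f"t = {t:g} m  ->  R >= {min_bend_radius(t) * 1000:g} mm")
# A 500-micron wafer needs a 25 mm bend radius; a 100 nm membrane can wrap
# around a radius of just a few microns.
```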


 

Alternatively, research has shown that small silicon components could be put into the electronic device and connected by hundreds of nano-wires (extremely thin wires). So, although the silicon components would not be bendable themselves, the nano-wires in between would enable the components to be stretched apart from each other. The image below helps show this: small chips are connected by flexible material that, when pulled taut, could indeed allow the screen to be stretched on a macro scale.
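
A toy model makes the geometry clear: only the gaps between the rigid islands stretch, so the sheet’s overall stretchability depends on how much slack wire is folded into each gap. All the dimensions below are invented for illustration.

```python
# Toy model of the rigid-island design: chips of width island_w sit on a
# stretchable substrate, separated by gaps bridged by wavy nano-wires of
# unfolded length wire_len. Only the gaps can stretch.

def max_stretch(island_w, gap, wire_len):
    """Fractional stretch of the sheet once the wavy wires are pulled straight."""
    stretched_pitch = island_w + wire_len   # gap opens until the wire is taut
    original_pitch = island_w + gap
    return stretched_pitch / original_pitch - 1

# 100-micron chips, 20-micron gaps, serpentine wires 60 microns long:
print(f"{max_stretch(100e-6, 20e-6, 60e-6):.0%} overall stretch")   # 33%
```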



When can we expect these devices?
The technology research is still far from showing that the morph concepts are possible. We may have to wait quite a while for this to become a reality.
Nonetheless, despite stretchable electronics not coming any time soon, flexible screens are certainly about to hit the market. These are screens that can be folded and bent; the key difference is that they don’t need to stretch at all, and neither do they require nanotechnology. OLEDs are the key to flexible screens. More on that next week though.


Other exciting morph concepts:
MOBILE SCRIPT:


WINDOW PHONE: 

Read more:
Here's a good place to start

Sunday, May 22, 2011

Haptic Technology - The Future of Touch

Intro to the Technology

Haptic technology will revolutionise touch screens forever. Why? Because it will finally give images a physical texture. Wood, stone, plastic, metal, crevasses, cracks, ridges, grass, water, wool, cotton, paper...

If there's an image of an object on a screen, whatever its real life texture, you will be able to feel it. Read on to find out more:





Haptics


Imagine a Google image of a piece of wood on a standard touch screen. Right now, it’s just an image: a 2D object formed from thousands of pixels. But what if, through the touch screen, you could feel the wood shown, every grain, every hole, every ridge? What if it felt hard rubbing against your fingertips, and you could feel the friction against it? And then, seconds later, another material could be shown on the screen, perhaps wool or a brush, which would feel much different, much softer.



This is one thing Haptic Technology aims to achieve - a revolution in touch technology. 

Haptics derives from the Greek for ‘sense of touch’. The idea is that some sort of physical response is generated by the device, whether it’s through vibrations, electric current or mechanical movement of the screen, to create a feeling of different textures or layers. Some companies are looking to create haptic touch screens that could replace computer keyboards. Physical keyboards are commonly preferred to touch keyboards, on iPads for example, because of the satisfying click and the texture of the keys against the fingertips when pressed. One attempt to counter this shortcoming of touch screens has been to provide a vibration whenever a button is pressed. For many, this is not good enough, and in some cases it just makes accurate typing even harder.
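
Even simple vibration feedback can be made less crude than one long buzz. In the sketch below, a sharp, brief pulse on key-down and a softer tick on key-up reads to the finger as a “click”; the vibrate() call is a stand-in for a hypothetical device driver, not any real API.

```python
import time

def vibrate(amplitude, duration_ms):
    # Stand-in for a device-driver call to the haptic actuator.
    print(f"pulse: amplitude={amplitude:.1f}, {duration_ms} ms")

def on_key_event(key, pressed):
    if pressed:
        vibrate(1.0, 8)    # crisp and short, like a key edge giving way
    else:
        vibrate(0.4, 4)    # gentler release tick

on_key_event("A", pressed=True)
time.sleep(0.05)           # the finger lifts shortly afterwards
on_key_event("A", pressed=False)
```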


Haptics could enable all the letters on a touch keyboard to feel like real keys, with clearly defined, feelable edges, without the need for large vibrations.



However, physical responses are not limited to fingertips on a touch screen; they could involve sensations for any part of the body. For example, you could have a small electronic pad that slots into your shoe beneath the sole and applies pressure to various parts of the underside of your foot. If you are controlling a virtual character moving across uneven ground in a video game, you could feel the rock and gravel the avatar runs across pressing into your feet in real life, creating an even more immersive experience. Scary stuff, eh?

One company is actually looking at a system whereby people can exchange hugs on a social network or online game and feel them in real life. Imagine playing The Sims 5, where your main character is hugged or hit by a computer character and you can actually feel it as you play.



Near Future Uses


- Touch screens with buttons you can feel
- Laptop and phone touch keyboards that improve typing accuracy
- Video game consoles (potentially included in the new Wii 2 controller, though more will be revealed about this on June 7th)
- E-paper, with reassuring page turns for the user and paper that actually feels like normal paper
- Aids for visually impaired people, e.g. Braille on touch screens that can actually be felt by the user


Far Future Uses

- An interesting one is the integration of haptics into holographics: being able to feel a 3D holographic image as though it actually exists.

- Computer Modelling

- Virtual Reality

- Online shopping - being able to feel the texture of furniture before you buy it; different sofa textures, for example.

- Medical science - doctors being able to feel a patient’s body temperature and skin through robots from thousands of miles away; also, training nurses in what it feels like to dissect, operate and inject without needing to be next to a body at all.



- Training in other areas too - soldiers training by being able to feel the full impact of a grenade without actually being hurt, defusing bombs and so on...





How does it work?

Touching a sofa or clothing and touching stones or bricks provide very different feedback to the brain. Receptors in the skin (mechanoreceptors, among others) gather information about the object being touched and carry it to the cerebral cortex in the brain, where the information is processed.

The skin is covered in complex patterns of receptors, some detecting pressure, others vibrations, temperature, edges, softness, pain... They work together to form a detailed picture in the brain of whatever is being touched. Recreating this sensation artificially is known as FORCE FEEDBACK.

Using sensors, actuators and other electronics, haptic technology aims to recreate this force feedback digitally. Methods include small vibrations, pulses and, most recently, a small electric current creating a safe electric field. Apple is reportedly planning to use upward blasts of hot and cold air in its keyboards to create feedback before the user even touches the keys, allowing the keyboard to be made even thinner (more info on this coming soon). There are also hints from patents and reports that Apple may be working on integrating true haptic technology into its touch screen devices.
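
How do you get from an image to something you can feel? One approach that has been explored is to drive the actuator from the image itself: where brightness changes sharply under the moving finger (edges, grain), the vibration or friction is made stronger. The sketch below is an invented, greatly simplified illustration of that mapping.

```python
# Image-driven texture rendering sketch: the change in image brightness
# under the fingertip drives the actuator, so edges and grain produce
# stronger feedback. The "image" and the mapping are invented.

image_row = [0.2, 0.2, 0.8, 0.8, 0.8, 0.3, 0.3, 0.9, 0.2, 0.2]  # brightness

def actuator_amplitude(finger_x):
    """Stronger vibration where brightness changes sharply (edges, grain)."""
    return abs(image_row[finger_x + 1] - image_row[finger_x])

for x in range(len(image_row) - 1):   # a finger sweeping left to right
    bar = "#" * int(actuator_amplitude(x) * 10)
    print(f"x={x}: {bar}")
# Flat colour gives no vibration; sharp edges give strong pulses.
```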

The possibilities are seemingly endless, so no doubt the integration of this technology, and research into it, will continue for decades. The revolution has already begun in the form of vibrations, but I predict it will improve at an incredible rate from about 2013, especially if the rumours about the Wii 2 are true.



Read More: 

If you want to know more: 

Immersion is a leading researcher and developer in this industry. Their site tells you a lot about haptics and its uses: