Cs TeChNoCrats
Monday, 16 September 2013
Smartphones can identify you by your touch and gestures...
Move over, passwords. US-based scientists have developed new software that enables a touchscreen smartphone to identify you simply by the unique way you tap and swipe on the device.
The software, SilentSense, developed by Cheng Bo and his colleagues at the Illinois Institute of Technology, has demonstrated 99 percent accuracy in tests.
It uses the phone's built-in sensors to record the unique patterns of pressure, duration, fingertip size and position of each user when interacting with their phone or tablet, Bo said.
Machine learning algorithms then turn this information into a signature that identifies the user — and will lock out anyone whose usage patterns do not match, New Scientist reported.
The system's accuracy can be further enhanced by enabling the smartphone's accelerometer and gyroscope to measure how much the screen moves when you are jabbing at it.
They can also pick up on your unique gait as you walk while using the screen. In tests, 100 users were told to use the smartphone's touch screen as they would normally.
SilentSense was able to identify the phone's owner with 99 percent accuracy after no more than 10 taps. Even with an average of 2.3 touches the system was able to verify the user 98 percent of the time.
The software stops checking the user's identity when apps like games are being used. However, to maintain security, it automatically switches on when more sensitive applications, such as email or SMS, are accessed.
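The article does not describe SilentSense's actual model, but the behaviour it reports (learn a signature from touch features, then lock out input that does not match) maps naturally onto one-class classification. Below is a minimal, hypothetical sketch in Python using scikit-learn; the feature layout, the choice of model and the majority-vote threshold are all illustrative assumptions, not the researchers' implementation.

```python
# Hypothetical sketch of SilentSense-style touch verification.
# NOT the researchers' implementation: the feature layout, the
# one-class SVM, and the majority-vote threshold are assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

# One row per tap/swipe: [pressure, duration_ms, fingertip_size,
# x_position, y_position, screen_movement] -- the kinds of signals
# the article says the phone's sensors record.
owner_touches = np.random.rand(500, 6)  # stand-in for enrollment data

# Fit on the owner's touches only; anything outside the learned
# pattern is treated as "not the owner".
model = OneClassSVM(nu=0.02, gamma="scale").fit(owner_touches)

def verify(recent_touches: np.ndarray) -> bool:
    """Accept the session if most recent touches match the owner.

    The article reports ~99 percent accuracy within 10 taps, so a
    session is judged on a small window of events, not a single touch.
    """
    votes = model.predict(recent_touches)  # +1 owner-like, -1 outlier
    return (votes == 1).mean() > 0.5       # majority vote (assumed)

# e.g. silently lock the phone when verify(last_10_touches) is False
```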
PCs to 'replace' humans in 20 years?
45% of jobs presently performed by humans will be handed over to computers within the next 20 years, a group of futurists has warned.
According to io9.com, researchers from the University of Oxford's James Martin School believe that the takeover will happen in two stages, with jobs in services, sales and construction likely the first to go.
Other vulnerable fields in the first stage include transportation/logistics, production labor, and administrative support.
In the second stage, the scientists think that jobs in management, science and engineering, and the arts will be at risk.
Sunday, 1 September 2013
5 Trends That Will Drive The Future Of Technology
Trends get a bad rap, mostly because they are often equated with fashions. Talk about trends and people immediately start imagining wafer thin models strutting down catwalks in outrageous outfits, or maybe a new shade of purple that will be long forgotten by next season.
Yet trends can be important, especially those long in the making. If lots of smart people are willing to spend years of their lives and millions (if not billions) of dollars on an idea, there’s probably something to it.
Today, we’re on the brink of a new digital paradigm, where the capabilities of our technology are beginning to outstrip our own. Computers are deciding which products to stock on shelves, performing legal discovery and even winning game shows. They will soon be driving our cars and making medical diagnoses. Here are five trends that are driving it all.
1. No-Touch Interfaces
We’ve gotten used to the idea that computers are machines that we operate with our hands. Just as we Gen Xers became comfortable with keyboards and mice, today’s millennial generation has learned to text at blazing speed. Each new iteration of technology has required new skills to use it proficiently.
That’s why the new trend towards no-touch interfaces is so fundamentally different. From Microsoft’s Kinect to Apple’s Siri to Google’s Project Glass, we’re beginning to expect that computers adapt to us rather than the other way around. The basic pattern recognition technology has been advancing for generations and, thanks to accelerating returns, we can expect computer interfaces to become almost indistinguishable from humans in little more than a decade.
2. Native Content
While over the past several years technology has become more local, social and mobile, the new digital battlefield will be fought in the living room, with Netflix, Amazon, Microsoft, Google, Apple and the cable companies all vying to produce a dominant model for delivering consumer entertainment.
One emerging strategy is to develop original programming in order to attract and maintain a subscriber base. Netflix recently found success with their “House of Cards” series starring Kevin Spacey and Robin Wright. Amazon and Microsoft quickly announced their own forays into original content soon after.
Interestingly, HBO, which pioneered the strategy, has been applying the trend in reverse. Their HBO GO app, which at the moment requires a cable subscription, could easily be untethered and become a direct competitor to Netflix.
3. Massively Online
In the last decade, massively multiplayer online games such as World of Warcraft became all the rage. Rather than simply play against the computer, you could play with thousands of others in real time. It can be incredibly engrossing (albeit a bit unsettling when you realize that the vicious barbarian you’ve been marauding around with is actually a 14 year-old girl).
Now other facets of life are going massively online. Khan Academy offers thousands of modules for school-age kids, Codecademy can teach a variety of programming languages to just about anybody, and the latest iteration is massive open online courses (MOOCs) that offer university-level instruction.
The massively online trend has even invaded politics, with President Obama recently reaching out to ordinary voters through Ask Me Anything on Reddit and Google Hangouts.
4. The Web of Things
Probably the most pervasive trend is the Web of Things, where just about everything we interact with becomes a computable entity. Our homes, our cars and even objects on the street will interact with our smartphones and with each other, seamlessly.
What will drive the trend in the years to come are two complementary technologies: Near Field Communication (NFC), which allows for two-way data communication with nearby devices, and ultra-low-power chips that can harvest energy from the environment, putting computable entities just about everywhere you can think of.

While the Web of Things is already underway, it’s difficult to see where it will lead us. Some applications, such as mobile payments and IBM’s Smarter Planet initiative, will become widespread in just a few years. Marketing will also be transformed, as consumers will be able to seamlessly access digital products from advertisements in the physical world.
Still, as computing ceases to be something we do seated at a desk and becomes a natural, normal way of interacting with our environment, there’s really no telling what the impact will be.
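The article names NFC and ultra-low-power chips but no particular software stack, so purely as an illustration, here is a minimal sketch of the reporting side of one such “computable entity”: a parking sensor posting JSON readings to a placeholder cloud endpoint, using only Python’s standard library. The URL, payload schema and reporting interval are all invented for the example.

```python
# Illustrative sketch only: a "computable entity" (here, a parking
# sensor) reporting its state to the cloud. The endpoint URL, payload
# schema and interval are invented; the article names no stack.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/api/readings"  # placeholder URL

def publish_reading(device_id: str, occupied: bool) -> None:
    payload = json.dumps({
        "device": device_id,
        "occupied": occupied,
        "ts": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as err:  # placeholder endpoint: failures expected
        print(f"publish failed: {err}")

# A real ultra-low-power device would sleep (or harvest energy)
# between reports; a short loop shows the shape of the traffic.
for _ in range(3):
    publish_reading("lot-42-space-7", occupied=True)
    time.sleep(1)
```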
5. Consumer-Driven Supercomputing
Everybody knows the frustration of calling a customer service line and having to deal with an automated interface. These systems work well enough, but using them takes some effort. After repeating yourself a few times, you find yourself wishing you could just punch your answers in or talk to someone at one of those offshore centers with heavy accents.
Therein lies the next great challenge of computing. While we used to wait for our desktop computers to process our commands and then lingered for what seemed like an eternity for web pages to load, we now struggle with natural language interfaces that just can’t quite work like we’d like them to.
Welcome to the next phase of computing. As I previously wrote in Forbes, companies ranging from IBM to Google to Microsoft are racing to combine natural language processing with huge Big Data systems in the cloud that we can access from anywhere.
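To see why this is hard, consider the naive approach those companies are moving beyond. The toy keyword matcher below (intents and trigger phrases invented for illustration) handles a few canned utterances and then falls apart, which is exactly what pushes vendors toward natural language processing backed by Big Data systems in the cloud.

```python
# Toy intent matcher -- not any vendor's system. Intents and trigger
# phrases are invented; real assistants use statistical models.
INTENTS = {
    "check_balance": {"balance", "account", "how much"},
    "report_outage": {"outage", "down", "not working"},
    "agent": {"human", "agent", "representative"},
}

def classify(utterance: str) -> str:
    text = utterance.lower()
    # Score each intent by how many of its trigger phrases appear.
    scores = {
        intent: sum(phrase in text for phrase in phrases)
        for intent, phrases in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("My internet is down"))           # report_outage
print(classify("I'd like to speak to a human"))  # agent
print(classify("Can you help me out here?"))     # unknown -- the hard part
```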
When Computers Disappear
When computers first appeared, they took up whole rooms and required specialized training to operate them. Then they arrived in our homes and were simple enough for teenagers to become proficient in their use within a few days (although adults tended to be a little slower). Today, my three year old daughter plays with her iPad as naturally as she plays with her dolls.
Now, computers themselves are disappearing. They’re embedded invisibly into the Web of Things, into no-touch interfaces and into our daily lives. We’ve long left behind loading disks into slots to get our computers to work and have become used to software as a service; hardware as a service is right around the corner.

That’s why technology companies are becoming increasingly consumer driven, investing in things like native content to get us onboard their platforms, from which we will sign onto massively online services to entertain and educate ourselves.
The future of technology is, ironically, all too human.
Tuesday, 20 August 2013
GOOGLE GLASS IN ACTION "HAVE A LOOK"
WHAT IS GOOGLE GLASS?
Google Glass (styled "GLΛSS") is a wearable computer with an optical head-mounted display (OHMD) that is being developed by Google in the Project Glass research and development project, with the mission of producing a mass-market ubiquitous computer. Google Glass displays information in a smartphone-like, hands-free format and can interact with the Internet via natural language voice commands.
Saturday, 17 August 2013
CHIP THAT CAN MIMIC THE HUMAN BRAIN
WASHINGTON: Scientists, including one of Indian origin, are developing a computer chip that mimics the human brain. Today's computing chips are incredibly complex and contain billions of nano-scale transistors, allowing for fast, high-performance computers, pocket-sized smartphones that far outpace early desktop computers, and an explosion in handheld tablets, the researchers said.

Despite their ability to perform thousands of tasks in the blink of an eye, none of these devices even come close to rivalling the computing capabilities of the human brain. But a Boise State University research team could soon change that.
Electrical and computer engineering faculty Elisa Barney Smith, Kris Campbell and Vishal Saxena have taken on the challenge of developing a new kind of computing architecture that works more like a brain than a traditional digital computer.
"By mimicking the brain's billions of interconnections and pattern recognition capabilities, we may ultimately introduce a new paradigm in speed and power, and potentially enable systems that include the ability to learn, adapt and respond to their environment," said Barney Smith, principal investigator of the study.
The project's success rests on a memristor: a resistor that can be programmed to a new resistance by the application of electrical pulses, and that remembers its new resistance value once the power is removed.
Memristors were first hypothesised in 1971, as a fourth fundamental circuit element alongside resistors, capacitors and inductors, but were fully realised as nano-scale devices only in the last decade.
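As a rough illustration of that behaviour, the sketch below simulates a single memristor using the linear ion-drift model HP Labs published alongside its 2008 device; the parameter values are invented for the example and have nothing to do with the Boise State team's actual devices.

```python
# Minimal memristor simulation (linear ion-drift model, HP Labs 2008).
# Parameter values are illustrative, not from the Boise State work.
R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped / fully undoped
K = 1e7    # lumps together ion mobility and device geometry (assumed)
DT = 1e-4  # seconds per simulation step

x = 0.1    # internal state in [0, 1]; higher x means lower resistance

def step(voltage: float) -> float:
    """Apply a voltage for one time step and return the new resistance."""
    global x
    resistance = R_ON * x + R_OFF * (1.0 - x)
    current = voltage / resistance
    x = min(1.0, max(0.0, x + K * current * DT))  # state drifts with current
    return R_ON * x + R_OFF * (1.0 - x)

for _ in range(200):   # positive pulses "program" the resistance down
    r = step(1.0)
print(f"after writing:   {r:.0f} ohms")

for _ in range(200):   # zero volts: no current flows, so the state --
    r = step(0.0)      # and hence the resistance -- is remembered
print(f"after power off: {r:.0f} ohms")
```

In a brain-like chip, each such device would store a connection weight that is strengthened or weakened by the pulses passing through it, which is what makes the memristor a natural stand-in for a synapse.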
The team's research builds on recent work from scientists who have derived mathematical algorithms to explain the electrical interaction between brain synapses and neurons.
"By employing these models in combination with a new device technology that exhibits similar electrical response to the neural synapses, we will design entirely new computing chips that mimic how the brain processes information," said Barney Smith.
These new chips will consume an order of magnitude less power than current computing processors, despite matching existing chips in physical dimensions.
This will open the door for ultra low-power electronics intended for applications with scarce energy resources, such as in space, environmental sensors or biomedical implants.