Archive for the ‘TechStuff’ category

Quillpad – Tamil Transliterator – cool tool

June 20th, 2007
|  Subscribe in a reader | Subscribe to poobalan.com by Email


 
This is a very interesting site. Check out their demo pages and try typing in Tamil.

Example:

ennama kannu sowkiyama?

aamama kannu, sowkiyathan.

transliterated:

எண்ணமா கண்ணு சொவ்கியமா?

ஆமாமா கண்ணு, சொவ்கியத்ன்

If you can't see the Tamil text above, you need to install a Tamil font.

 

 

25 Web Sites to Watch by PCWorld

June 19th, 2007


Interesting article. Try Quintura, a visual search engine.

poobalan

by Preston Gralla Mon Jun 18, 4:00 AM ET

source

Think that all of the great Web sites have already been invented? Think again. The Internet is evolving in new and inventive ways thanks to mashups that pull data from all over the Web and to AJAX-based interfaces that give sites the same degree of interactivity and responsiveness that desktop apps possess.

» Read more: 25 Web Sites to Watch by PCWorld

Google launches Google Gears – to allow offline use

May 31st, 2007


Gears puts Google in the driver's seat

 

The Google Gears plugin download page.
Stephen Hutcheon
May 31, 2007 – 9:00AM
 

Google is rolling out a technology designed to overcome the major drawback faced by all web-based applications: the fact that they don't work without an internet connection.

Google Gears is an open source technology for creating offline web applications that is being launched today at Google's annual Developer Day gatherings around the world.

"With Google Gears, we're tackling the key limitation of the browser in order to make it a stronger platform for deploying all types of applications and enabling a better user experience," Google CEO Eric Schmidt said in a statement.

The Google Gears technology is designed to be used for web applications such as email and word or image processing.

While it can be used with non-Google applications, it's clear that the web search and advertising giant will be the major beneficiary of what is expected to be an enthusiastic take up.

That enthusiasm is not expected to extend to Microsoft. Google has already invaded the software company's turf, offering Google Apps – its package of workplace programs – as an alternative to Microsoft's Office suite.

To date, the Google replacement proposition hasn't been appealing to large private and public sector organisations partly because of the lack of offline access.

Launched in February, Google's suite of web-based programs includes a word processor, email, a spreadsheet and a calendar.

Google said it would charge corporate customers $US50 ($61) a year for the suite, about a tenth of what Microsoft charges for its Office package.

But there haven't been many takers. In February, it was reported that the Commonwealth Bank suspended a trial of Google Apps, which it was looking at rolling out for its 50,000-strong workforce.

The Gears technology promises to give Google a better platform from which to go after Microsoft's very lucrative Office franchise.

"This is a core piece of technology that we're releasing to the community to really help move the industry forward on solving this problem," Google Australia's senior product manager Carl Sjogreen told smh.com.au.

"For your average web user, the end goal is that basically it's seamless whether you're connected to the internet or not."

He described Gears as a "critical missing piece in the evolution of making the web and the browser a platform for all applications".

The search for a way to give web-based programs the stability and portability of desktop applications has been going on for over a decade.

Several organisations, including Mozilla Corporation, Adobe and Opera Software, have been working on similar projects and are backing the Google push.

Mozilla has already flagged that its upcoming Firefox 3 browser will support offline applications.

To start the ball rolling, Google has "Gears-enabled" its RSS feed reader, Google Reader.

After downloading the Gears plug-in, the browser will automatically determine whether a user is online or offline. If it's the latter, the next time the user is online, the application will synchronise with the server.
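The offline/online dance described here can be sketched in a few lines. This is not the Gears API itself, just an illustration of the pattern (the `OfflineStore` class and the server shape are invented for the example): changes made while offline go into a local queue, and the queue is replayed against the server when the connection returns.

```typescript
// Sketch of the offline-then-sync pattern Gears enables for an app
// like Google Reader. All names here are illustrative.

type Item = { id: string; read: boolean };

class OfflineStore {
  private cache = new Map<string, Item>(); // local copy of server data
  private pending: Item[] = [];            // changes made while offline
  online = true;

  // Apply a change locally; queue it for later if we're offline.
  markRead(id: string): void {
    const item = this.cache.get(id) ?? { id, read: false };
    item.read = true;
    this.cache.set(id, item);
    if (!this.online) this.pending.push(item);
  }

  // On reconnect, replay queued changes to the server in order.
  // Returns how many changes were replayed.
  sync(server: { apply(item: Item): void }): number {
    const replayed = this.pending.length;
    for (const item of this.pending) server.apply(item);
    this.pending = [];
    return replayed;
  }
}
```

A real implementation would also pull new server items into the local cache on reconnect; Gears additionally provides a local SQLite database and a resource cache so the offline copy survives a browser restart.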

Google says it will work with others in the web community to help develop an industry standard that will further facilitate the rollout of hybrid programs which work both online and offline.

"It's something that we're making this available in its early stages and in an open source environment so that everyone can help test its capabilities and help improve upon it," said Mr Sjogreen.

"As more and more people are depending on web applications to manage their lives and get information about what's going on, it becomes an increasing problem when you can't access those applications when you're offline."

 
Google Gears – the game has changed
Posted by Marc Orchant @ 10:30 pm

source

I’m not often left feeling completely astonished these days. I like to think I’m pretty on top of where things are going. But I just got completely blindsided by Google Gears. There’s already plenty of first-glance analysis to help you grasp the magnitude of what they’ve done. I recommend you start by listening to David Berlind’s podcast interview with Linus Upson, a director of engineering at Google, about the back story on Gears and what Google is aiming to accomplish with this broadside.

Then you can pop over to Techmeme and read until you can’t take any more guessing, prognosticating, and crystal ball gazing. There’s a huge thread of posts and counter-posts already piling up and at this hour (10:25 p.m. Mountain time) the pace with which this is pushing everything else off the page is pretty impressive.

Rather than trying to tell you “what it all means”, I thought a quick display of Gears in action would be infinitely more interesting. Here’s what I did in about five minutes to turn Google Reader, the tool I’m using to manage my RSS habit these days, into an offline reader. Follow along because I think you’ll be every bit as blown away as I am at how easy this is.

Step 1 – Install Google Gears (as a Firefox add-in in my case). Windows, Mac and Linux Firefox are supported as is Internet Explorer. Safari support is promised soon according to the podcast interview mentioned above.

Step 2 – Click the offline button in Google Reader (next to the account name in the upper right corner of the window). Google Reader asks if you want to download content before going offline. Downloading 2000 items took only a couple of minutes over a WiFi connection.

Step 3 – Disconnect from the intertubes and read your RSS feeds as if you were still connected. When you reconnect to the network, Google Reader synchronizes your local changes (items read, shared and/or starred) with the server and updates new content from your subscription list. Seamless.

Step 4 – There is no Step 4.

This is big, folks. In my admittedly limited testing the offline reading experience is completely consistent with what I’ve come to expect when working with Reader online (with the exception of images, which are not downloaded for offline viewing). Google is open-sourcing Gears and, as David points out in the post accompanying his podcast interview, they’ve taken a huge step towards defining a de facto standard for taking web apps offline. The reason I think this isn’t just crazy Web 2.0 hype is that Adobe has announced they are aligning their Apollo efforts with the approach Google’s taken with Gears, as there are significant similarities in how the two companies have approached their online/offline application solutions.

There are probably a few freaked out people in the web and hybrid application worlds right about now. Because the game has changed.
 

Microsoft Debuts ‘Minority Report’-Like Surface Computer

May 30th, 2007


 
Melissa J. Perenson, PC World
 
Tuesday, May 29, 2007 9:00 PM PDT

After five years of keeping the project shrouded in secrecy, Microsoft today revealed its plans for Microsoft Surface, the first product in a category the company calls "surface computing." The technology, formerly code-named Milan, lets Microsoft turn a seemingly ordinary surface, such as a tabletop or a wall, into a computer. Introduced today at the D: All Things Digital conference in Carlsbad, California, Microsoft Surface is a "multi-touch" tabletop computer that interacts with users through touch on multiple points on the screen.

The concept is simple: Users interact with the computer completely by touch, on a surface other than a standard screen. "It will feel like Minority Report," promises Pete Thompson, general manager of Microsoft's surface computing group. "Very futuristic–but it will be here this year."

"We see it as the first of its kind in a new category of computing device. It's very approachable for users; the learning curve should be very instinctual," says Thompson.

Mark Bolger, director of marketing for Microsoft's consumer productivity experiences group, adds, "This is a NUI–a natural user interface. It's a natural way for people to interact with digital content using their hands. Users can control information with the flick of a hand."

The product unveiled today will be Microsoft branded and available to the company's four partners–Harrah's Entertainment, International Game Technologies, Starwood Hotels, and T-Mobile–in November. Starwood Hotels plans to put Microsoft Surface devices in common areas, to provide functions such as a virtual concierge; T-Mobile will use them to enhance the cell phone shopping experience. Microsoft expects to deploy dozens of units with each of its partners by year's end.

Advent of Social Computing

Never mind today's buzz about social networking–with Surface and its multi-touch technology, Microsoft envisions a new era of social computing. Certainly, the horizontal, tabletop configuration of Surface raises a variety of possibilities, such as friends gathering for drinks in a hotel lounge and sharing photos and videos.

Bolger notes four attributes that comprise Microsoft's definition of surface computing: direct interaction (for example, you might "dip" your finger on an on-screen paint palette, and then use your finger to draw on the screen); multi-touch contact, so the screen can react to multiple fingers and inputs simultaneously; multi-user experience, so multiple people can gather around and interact with the screen simultaneously; and object recognition, so the surface can recognize tagged objects and interact with them.

The demo is impressive. In the paint application Microsoft showed me, I could put my fingers down on the surface and draw, and suddenly I had yarn-like Raggedy Ann hair on my impromptu drawing. A digital photo gallery let me shuffle through images as easily as I would piles of photos in my grandmother's shoe box–only now I could also enlarge and rotate any image I liked.

David Daoud, an analyst for market research firm IDC, is a believer. "[Microsoft Surface] itself is an innovation; it's a form factor that's long overdue. [It] focuses more on user experience than what the industry is used to producing–desktops, notebooks, computing devices that look like each other. Microsoft has done its homework, in terms of understanding how people behave and improving user experience. [Surface] really brings the computing experience to a different level than consumers are used to."

Inside the Table

Microsoft Surface couples standard PC components with the cameras and projectors necessary to enable surface computing. The demo unit employed a 3-GHz Pentium 4 CPU, 2GB of RAM, and an off-the-shelf graphics card with standard drivers (and Microsoft's own application layer to allow the GPU to help with sensing touch).

The images the PC outputs are displayed on the tabletop surface through a short-throw DLP projector contained inside the table; the lens is just 21 inches from the surface. The rear-projection system produces a 30-inch-diagonal, 4:3-aspect-ratio image at a resolution of 1024 by 768 at 60 Hz.
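Those numbers imply a fairly coarse pixel density. A 4:3 display's width is 4/5 of its diagonal (a 3-4-5 triangle), so the arithmetic works out to roughly 43 pixels per inch:

```typescript
// Pixel density of Surface's 30-inch-diagonal, 4:3, 1024x768 display.
const diagonalIn = 30;
const widthIn = (diagonalIn * 4) / 5;  // 24 inches
const heightIn = (diagonalIn * 3) / 5; // 18 inches
const ppi = 1024 / widthIn;            // about 42.7 pixels per inch
```

That is far lower than a contemporary desktop monitor, but plenty for a tabletop viewed at arm's length.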

The table also houses a power supply, stereo speakers, an infrared illuminator, and five overlapping cameras that sense movement on its surface. The cameras feed images of objects on the surface–be they fingers or tagged objects such as game pieces, a Wi-Fi camera, or a digital audio player–back into the computer, where they're processed mostly in the GPU, according to Nigel Keam, one of Microsoft's architects behind Surface.

The specially treated surface's multi-touch capability has no implicit limit, says Keam. "We optimize it for 52 [points of touch], based on the most extreme reasonable scenario we could come up with: Four people with all fingers down, and 12 game pieces in the center."

One of the hardest things about working with the technology was to get the touch surface right. Developers had to walk a fine line in creating a surface that's opaque enough to hold a rear-projected image but translucent enough for cameras to see through it. "You need a strong diffuser on the topmost surface," Keam notes, "but the camera wants to see straight through the diffuser to what's on the surface. So it's a balancing act. We had to research a lot of different ways to make the surface look right, feel right, and be tough. Everything meets at this one layer."

The device's infrared capability means you can do more than just use your fingers on the tabletop surface. Tags on a Wi-Fi camera or a digital audio player, for example, could be used to transfer images, music, or playlists. Or perhaps a card could store your account information and let any Microsoft Surface unit grab your images from a central server. Tagged pieces might generate special effects for drawings or images, and puzzle pieces could act as props in interactive games.

How does this work? Let's take the example of video puzzle pieces, a game in which you have to assemble a jigsaw puzzle made of glass, and the puzzle pieces have video projected on them. "The illuminator shines infrared up, which illuminates the tags on the glass pieces and reflects the IR image off the tags," explains Keam. "The cameras pick up the images of those tags, and pass them on to the computer, which processes the images and figures out where the tags are, and thereby where the pieces are. This way, the computer knows where the tags will be on each piece. The computer then chops the appropriate square out of the video playing back, because it knows where each piece is supposed to be, and then it's projected back to the piece."
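Keam's puzzle example boils down to two mappings: each piece id owns a fixed cell of the video frame, and each detected tag tells the projector where that cell should currently land on the table. A rough sketch of that bookkeeping, with all names and the grid layout invented for illustration:

```typescript
// Sketch of the video-puzzle pipeline described above: the cameras
// report tag positions; the computer crops each piece's cell out of
// the video frame and projects it back at the piece's location.

type Tag = { pieceId: number; x: number; y: number }; // tag centre, table coords
type Rect = { x: number; y: number; w: number; h: number };

// The finished puzzle is a grid; each piece id owns one fixed cell
// of the source video frame.
function sourceRect(pieceId: number, cols: number, cellW: number, cellH: number): Rect {
  const col = pieceId % cols;
  const row = Math.floor(pieceId / cols);
  return { x: col * cellW, y: row * cellH, w: cellW, h: cellH };
}

// Pair each detected tag's video cell with its current table position,
// producing crop/project instructions for the renderer.
function projectionPlan(tags: Tag[], cols: number, cellW: number, cellH: number) {
  return tags.map((t) => ({
    src: sourceRect(t.pieceId, cols, cellW, cellH),                       // crop here
    dst: { x: t.x - cellW / 2, y: t.y - cellH / 2, w: cellW, h: cellH },  // project here
  }));
}
```

The hard part in the real system is upstream of this, of course: recovering clean tag positions (and orientations) from the infrared camera images is the computer-vision work Keam describes.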

Future Touch

"I think our approach of starting first in commercial space will allow consumers to change how they shop and how they're entertained," says Microsoft's Bolger. "It will help them understand how surface will change their lives. Over time, we'll go beyond the leisure and entertainment industries, and move into different environments, such as schools, businesses, homes.

"We're balancing public perception of what's the future and what's now. Interacting with the wall is here today."

IDC analyst Daoud notes that the rollout may be slow, but the introduction of Surface will get consumers, and the industry, thinking about alternative computing. "You will see us now talk about this concept of surface computing–about how you get away from the usual input devices. The technology is so interesting that I think the wow impact will be there from the beginning. Consumers will be more impressed with [Surface] than with anything they've seen in computing innovation in the past several years."

Listen to the World’s Soundscapes in Google Earth

May 30th, 2007



 
Bernie Krause’s company, Wild Sanctuary, has released over 50 CDs of wildlife recordings and bio-soundscapes. Bernie and his team have traveled to all corners of the world with microphones, recording the sounds of nature, the bustle of cities and the voices of hundreds of species of animals. In addition to recording the various critter sounds in the wild, they also logged metadata like location, altitude, weather conditions and time of day for each recording.

Today, Wild Sanctuary released a Google Earth layer that maps its various soundscapes and all of the collected metadata on Google’s 3-D world map. Bernie demoed the KML dataset at Where 2.0 by flying around in Google Earth, zooming in on the Amazon basin to play the whoops of howler monkeys, then flying to Canada’s Algonquin National Forest to hear a pack of wolves howling at a full moon.
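A layer like this is, at bottom, just a KML file of placemarks whose descriptions link out to audio. A minimal hand-written sketch of the idea (the coordinates and URL below are placeholders, not Wild Sanctuary's actual data):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>Soundscapes (illustrative sketch)</name>
    <Placemark>
      <name>Howler monkeys, Amazon basin</name>
      <description><![CDATA[
        Recorded at dawn.
        <a href="http://example.com/howler.mp3">Play recording</a>
      ]]></description>
      <Point>
        <coordinates>-63.0,-3.5,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
```

Google Earth renders each Placemark as a clickable pin, with the description (and its audio link) shown in the pop-up balloon, which is how metadata like altitude, weather and time of day can ride along with each recording.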

Bernie has it all — whale sounds recorded underwater in Maui, gibbons in Africa and the deafening din of mid-day in the wilds of the Indonesian jungle. Download the KML layer for Google Earth at earth.wildsanctuary.com.