Wednesday, February 24, 2010

Hacker attacked Intel at same time as Google

Chip maker Intel has announced that it was the victim of a 'sophisticated' hacker attack this year, at about the same time as the attack on Google. Intel disclosed the attack in a regulatory filing late on Monday. The disclosure doesn't necessarily mean that Intel was infiltrated or that the attackers were the same ones that targeted Google.



Intel spokesman Chuck Mulloy said that the attack on Intel wasn't broad-based like the one that hit Google. He said Intel isn't aware of any intellectual property being stolen. Intel, like other major corporations, faces constant computer attacks. Mulloy said the company was only pointing out there was a connection in terms of the timing of the Google attacks as part of a disclosure to investors about the company's risks.

The disclosure comes amid increased fears of state-sponsored espionage targeting corporate computer networks. Google revealed last month that its network was attacked from inside China and that the intruders stole intellectual property - an attack that Google says could cause it to leave China. Google said at least 20 other companies were targeted as part of the attack, but those companies weren't identified. Software maker Adobe Systems and Rackspace, a Web hosting service, have acknowledged being targets.

Tuesday, February 23, 2010

Data thefts cost firms $2 Million a year

Thefts of trade secrets and customer information cost firms an average of $2 million each last year, according to research conducted by security software maker Symantec, reports Bloomberg.

In a survey of 2,100 IT executives worldwide, 75 percent of respondents reported cyber attacks last year. Most intrusions were aimed at stealing a firm's intellectual property, such as product designs. "We can expect to see companies going out of business because their intellectual property is stolen," Maureen Kelly, a senior Director of Product Marketing, said in an interview with Bloomberg. "For some, this is a matter of life or death."



Businesses coped with the recession by trimming the staff who handle security, even as hackers have become more skillful. Last month, Google said it and at least 20 other companies had suffered a series of 'highly sophisticated' online attacks originating in China.

In many cases, hackers have switched their attention to stealing trade secrets, according to Symantec. "While hackers are still looking for customer data, such as credit card information, they are now going more for industrial espionage and counterfeiting," Kelly said in an interview.

"They are also more specialized - with one person to break in, another to get the data, a third to install the software to steal information, and a fourth to encrypt it and distribute it," she said.

Developing countries may face e-waste crisis: UN

If proper electronic-waste recycling is not established in developing countries, they will face serious environmental and public health consequences, says a United Nations report. According to the UN, the urgency in addressing e-waste disposal is driven by the sharp rise in sales of electronic products expected over the next decade in emerging countries such as China and India, across continents such as Africa, and over large regions including Latin America.



Such imports are expected to add millions of tons of e-waste in regions where recycling efforts are inadequate to handle even current e-waste levels, reports InformationWeek. Even as those recycling shortfalls go unaddressed, the quantity of e-waste keeps growing.

For example, e-waste from old computers is expected to jump from 2007 levels by 200 to 400 percent in South Africa and China, and by 500 percent in India. E-waste from discarded mobile phones will be about seven times higher than 2007 levels in China and 18 times higher in India, said the report, released Monday by the UN Environment Programme. E-waste from televisions will be 1.5 to two times higher in China and India. This year, China is expected to produce about 2.3 million tons of e-waste domestically, second only to the U.S. with about three million tons.

Among the report's recommendations is that countries establish e-waste management centers of excellence that build on existing organizations working in recycling and waste management.

Thursday, February 18, 2010

Microsoft includes social networks in Office 2010

In a bid to make Outlook more user friendly, Microsoft plans to include social networking services in the latest generation of its Outlook email program, to be released with the Office 2010 set of applications later this year.

In a video posted on the U.S. software firm's website, Dev Balasubramanian, Outlook Office Group Product Manager, said, "It really is about bringing friends, family, and colleagues into your inbox. As you communicate with them you can see their social activities; you can see all of the folks in your social network and it updates as you are reading your email."



Software that channels LinkedIn updates to Outlook inboxes was available online on Wednesday at linkedin.com/outlook for people dabbling with a test version of the popular email program. Microsoft is now talking to Facebook and MySpace about doing the same with content from those online communities. The LinkedIn connection to Outlook will allow people using the email program to stay in tune with any changes in job status, contact information, or affiliations being shared by friends at the career-focused online community. Elliot Shmukler, Product Management Director at LinkedIn, said, "LinkedIn is all about your professional network. Outlook powers the professional inbox so the match is very clear."

Microsoft's announcement came shortly after Google ran into trouble over Buzz: the Electronic Privacy Information Center filed a complaint with the U.S. Federal Trade Commission on Wednesday calling for an investigation into whether the original Buzz wrongly disclosed too much information about people. Google Buzz was launched with a feature that automatically created public social networks based on the Gmail contacts people most frequently sent messages to.

Tuesday, February 16, 2010

Google plans superfast internet

Google plans to build a fibre optic broadband network that will connect customers to the internet at speeds 100 times faster than most existing broadband connections in the US, the company announced on its corporate blog.

"Our goal is to experiment with new ways to help make internet access better and faster for everyone," two Google product managers, Minnie Ingersoll and James Kelly, wrote in the blog post Wednesday.



They said that Google plans to build and test the network in trial communities around the country starting later this year and that the tests could encompass as many as 500,000 people. They cited 3-dimensional medical imaging and quick, high-definition film downloads among the applications of such high-speed internet access.

"We'll deliver internet speeds more than 100 times faster than what most Americans have access to today with 1 gigabit per second, fibre-to-the-home connections," the post said. "We plan to offer service at a competitive price to at least 50,000 and potentially up to 500,000 people."

"We're doing this because we want to make the web better and faster to everyone," said Kelly, who also promised that the network would operate on open access network, in which users could choose various internet providers and which would not give preference to any one kind of content. Kelly appealed to local officials who were interested in having their community participate in the trial to contact the internet giant.

The announcement continued Google's recent push into market sectors beyond its core web search speciality. In the last year it has made a splash in the mobile phone market with its Android operating system and Nexus One handset, and on Tuesday it announced a social networking feature aimed at taking on Facebook and Twitter.

While broadband industry incumbents may fear the entry by Google, Federal Communications Commission chairman Julius Genachowski welcomed the move, the Washington Post reported.

"Big broadband creates big opportunities," he said in a statement. "This significant trial will provide an American testbed for the next generation of innovative, high-speed internet apps, devices and services."

Basics of Six Sigma

Introduction to Six Sigma:

What is Six Sigma?

  • It is a highly disciplined process.
  • It helps us focus on developing and delivering near-perfect products and services.
  • It is a management philosophy.
  • It is a customer-based approach that recognizes that defects are expensive.
  • Fewer defects > lower costs > improved customer loyalty.
  • It is a way to achieve strategic business results.
  • Six Sigma is rooted in statistics.
  • The goal is to produce fewer than 3.4 defects per million opportunities (a rough DPMO calculation is sketched just after this list).
  • Most organizations do not achieve this, which means there is still opportunity for improvement.
  • Reaching the Six Sigma level of 3.4 defects per million opportunities or fewer requires a defined, disciplined process.
  • Sigma is a statistical term that measures how far a given process deviates from perfection.
  • The central idea is to measure the defects in a process, then eliminate them, getting as close to "zero defects" as possible.
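
For readers who want to see the arithmetic behind that 3.4 figure, here is a minimal Python sketch (not part of the original post; the defect counts are invented for illustration) that converts a defect count into DPMO and an approximate sigma level using the conventional 1.5-sigma shift:

# Minimal illustration: defects per million opportunities (DPMO) and an
# approximate sigma level. The example figures below are invented.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Approximate sigma level, applying the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

d = dpmo(defects=500, units=100_000, opportunities_per_unit=5)
print(f"DPMO: {d:.0f}")                             # 1000
print(f"Sigma level: {sigma_level(d):.2f}")         # about 4.59
print(f"Six Sigma target: {sigma_level(3.4):.2f}")  # about 6.0

Running it shows why 3.4 DPMO corresponds to the "six sigma" label once the 1.5-sigma shift is applied.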

Six Sigma is:

  • The structured application of tools and techniques.
  • Applied on a project basis to achieve sustained strategic results.


Six Sigma - Key Concepts

At its core, Six Sigma revolves around a few key concepts.

Critical to Quality: The attributes most important to the customer.

Defect: Failing to deliver what the customer wants.

Process Capability: What your process can deliver.

Variation: What the customer sees and feels.

Stable Operations: Ensuring consistent, predictable processes to reduce variation.

Design for Six Sigma: Designing to meet customer needs and process capability.

The Six Sigma Methodology

Six Sigma has two key methodologies:

D M A I C: Define, Measure, Analyze, Improve, Control.

It consists of the following five steps:

1. Define the process improvement goals that are consistent with customer demands and enterprise strategy.

2. Measure the current process and collect relevant data for future comparison.

3. Analyze to verify relationships and the causality of factors. Determine what the relationships are, and attempt to ensure that all factors have been considered.

4. Improve or optimize the process based upon the analysis, using various techniques.

5. Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production, and thereafter continuously measure the process and institute control mechanisms (a short process-capability sketch follows this list).
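
The Control step above mentions establishing process capability. As a hedged illustration (not from the original article), the short Python sketch below computes the standard Cp and Cpk capability indices from a set of pilot-run measurements against hypothetical specification limits:

# Illustrative only: standard process-capability indices Cp and Cpk.
# The measurements and specification limits below are hypothetical.
from statistics import mean, stdev

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk) for a list of measurements and spec limits.

    Cp  = (USL - LSL) / (6 * sigma)              -- potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 * sigma)  -- allows for an off-centre mean
    """
    mu, sigma = mean(samples), stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

measurements = [10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 10.02, 9.99]
cp, cpk = process_capability(measurements, lsl=9.90, usl=10.10)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")

A Cp of 2.0 (with a Cpk of about 1.5 once the 1.5-sigma shift is allowed for) is the level commonly associated with Six Sigma performance.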



D M A D V: Define, Measure, Analyze, Design, Verify.

It also consists of five steps:

1. Define the design goals that are consistent with customer demands and enterprise strategy.

2. Measure and identify CTQs (Critical to Quality characteristics), product and production capabilities, and risk assessments.

3. Analyze to develop and evaluate design alternatives, create a high-level design and evaluate design capability.

4. Design the details, optimize the design, and plan for design verification. This phase may require simulations.

5. Verify the design, set up pilot runs, implement the production process and hand it over to the process owners.

Roles for implementing Six Sigma

To implement Six Sigma in an organization, the following roles are instrumental:


Champions

  • Drawn from upper management.
  • Responsible for Six Sigma implementation.
  • Act as mentors to Black Belts.

Master Black Belts

  • Identified by Champions.
  • Act as in-house expert coaches.
  • Guide Black and Green Belts.

Black Belts

  • Operate under Master Black Belts.
  • Devote 100% of their time to Six Sigma.
  • Focus on project execution.

Green Belts

  • Employees who take up Six Sigma implementation alongside their other job responsibilities.
  • Operate under the guidance of the Black Belts.

Green and Black Belts are empowered to initiate, expand, and lead projects in their areas of responsibility.


So wishing you all zero-defect process management and control :)

Nokia launches 5233, for $160 in India

Nokia has launched a new model, the 5233 touch-screen phone, which is apparently the cheapest Nokia touch-screen mobile you can purchase in India, at $160, reports infocera.

The Nokia 5233 is built for entertainment, with a 3.2 inch touch screen, touch user interface, Nokia Ovi player, FM radio, email, Bluetooth, GPS with Ovi Maps, and a battery life of up to seven hours.


Along with this handset, the buyers will also receive an AC-8 charger, WH-102 stereo headset, PlectrumStylus CP-306, BL-5J battery and user guide. It is available in black and red colours.

One notable feature is that you can access your social networking sites and manage up to 10 email accounts.

Combined PCs beat 2nd fastest supercomputer

Legions of personal computers (PCs), engaged in a project to map the Milky Way, beat the world's second fastest supercomputer in sheer performance.

At this very moment, tens of thousands of PCs worldwide are quietly working together to solve the largest and most basic mysteries of our galaxy.

Enthusiastic volunteers from Africa to Australia are donating the computing power of everything from decade-old desktops to sleek new netbooks to help computer scientists and astronomers at Rensselaer Polytechnic Institute map our Milky Way.



Now, just this month, the collected computing power of these humble home computers has surpassed one petaflop, a computing speed that exceeds that of the world's second fastest supercomputer.

Since the project began, more than 45,000 individual users from 169 countries have donated computational power to the effort. Currently, approximately 17,000 users are active in the system.

The project, MilkyWay@Home, uses the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which is widely known for the SETI@home project, used to search for signs of extraterrestrial life.

Today, MilkyWay@Home has outgrown even this famous project in terms of speed, making it the fastest computing project on the BOINC platform and perhaps the second fastest public distributed computing programme ever in operation (just behind Folding@home).

The interdisciplinary team behind MilkyWay@Home, which ranges from professors to undergraduates, began formal development under the BOINC platform in July 2006 and worked tirelessly to build a volunteer base from the ground up and grow its computational power.

Each user participating in the project signs up their computer and offers up a percentage of the machine's operating power that will be dedicated to calculations related to the project.

For the MilkyWay@Home project, this means that each personal computer is using data gathered about a very small section of the galaxy to map its shape, density, and movement.

In particular, computers donating processing power to MilkyWay@Home are looking at how the different dwarf galaxies that make up the larger Milky Way galaxy have been moved and stretched following their mergers with the larger galaxy millions of years ago.

This is done by studying each dwarf's stellar stream. Their calculations are providing new details on the overall shape and density of dark matter in the Milky Way galaxy, which remains largely unknown.

The galactic computing project had very humble beginnings, according to Heidi Newberg, associate professor of physics, applied physics, and astronomy at Rensselaer.

Her personal research to map the 3-D distribution of stars and matter in the Milky Way using data from the extensive Sloan Digital Sky Survey could not find the best model to map even a small section of a single galactic star stream in any reasonable amount of time.

"I was a researcher sitting in my office with a very big computational problem to solve and very little personal computational power or time at my fingertips," Newberg said.

"Working with the MilkyWay@Home platform, I now have the opportunity to use a massive computational resource that I simply could not have as a single faculty researcher, working on a single research problem."

Before taking the research to BOINC, Newberg worked with Malik Magdon-Ismail, associate professor of computer science, to create a stronger and faster algorithm for her project, says a Rensselaer Polytechnic release.

Together they greatly increased the computational efficiency and set the groundwork for what would become the much larger MilkyWay@Home project.

Wednesday, February 10, 2010

Readers over 60 favour internet to satisfy love for books

Older British readers are using the internet to revive their love of books by writing on the web and joining book clubs. Some 31 percent of people over 60 are keen to go online to publish short stories and join book clubs, the survey by the charity Booktrust in Britain found, according to Reuters.

Mark Johnson, Digital Producer for the HarperCollins Authonomy website, said it was attracting high numbers of people over 50. "Perhaps this is because age and experience can offer a clear advantage to anyone hoping to write engagingly, or perhaps older people now have more time, and are more confident, to share their passions online. But our experience suggests that older generations aren't just learning how to use the web - they're taking advantage of it like never before," he said.



The survey of 1,162 over-60s found 55 percent view the internet as a crucial part of their lives and that 93 percent see it as a positive development, with over 32 percent stating that they find having access to the internet liberating.

Tuesday, February 9, 2010

Almost all good news: 100 days of Windows 7 on Feb 8th 2010

Unlike politicians, operating systems (OS) don't get a honeymoon with the general public. Windows 7 has been on the market for almost 100 days now, so - as in politics - it's a good time to review how the software has performed so far. The results are largely positive.

First and foremost, Microsoft has to be pleased with sales, which have been brisk. Just a week after the Windows 7 launch Oct 22, 2009, the sales figures had already bested the company's expectations. "Compared with the start of Windows Vista, five times as many consumers have opted for the new operating system in the first five days," Microsoft reported.


Even better: despite millions of new installations, no major problems have been reported. "There have been astonishingly few problems with Windows 7," says Axel Vahldiek from German computer magazine c't. He'd know: his magazine fields questions from readers. Unlike with the OS's predecessor, Windows Vista, the questions received by c't generally involve minor issues.

That said, even the little things can rub nerves the wrong way. "The biggest problems are coming from older hardware," says Axel Vahldiek. If the manufacturer doesn't produce Windows 7-ready drivers, then the device will either refuse to work under the new OS or offer limited functionality. The difficulties are most prevalent in peripheral devices like scanners with SCSI ports.

The blame shouldn't necessarily be laid at Microsoft's door, though. The device makers sometimes make things difficult by design, Vahldiek explains. They might be speculating that those affected by problems will buy new hardware and throw their old devices out if they don't offer enough functionality. The hardware inside the PC usually works without a problem.

No major security holes have been identified yet. Microsoft clearly learned its lesson from the painful introduction of earlier operating systems. "From a security standpoint, Microsoft's Windows 7 has made significant progress over its prior versions XP and Vista," reports the German Federal Agency for Security in Information Technology (BSI). Attacks on the system itself have become so difficult that viruses are instead focusing on vulnerabilities in third-party applications.

The experts at the BSI nevertheless still see some room for improvement: given the strong protection mechanisms in Windows 7, it's a shame that Microsoft fails to preset all user accounts as "restricted".

The typical procedure instead requires that an administrator account be set up. This allows potentially vulnerable applications an unnecessarily high level of permissions. "The administrator account that Microsoft has conveniently added for managing user accounts nevertheless fails to represent an effective barrier here."

The BSI's grades for Windows 7 are better for the protection of user data using the BitLocker hard drive encryption function. This has been reworked to be significantly more user friendly. Then again, it is also only available in the two most expensive versions of Windows 7: Ultimate and Enterprise.

Because bugs are an inherent part of any software release, especially for software as complicated as modern operating systems, users can expect updates and improvements to start arriving shortly after publication.

In the past, Microsoft has typically rolled up the improvements into multiple Service Packs (SP). No information is available yet on when "SP1" for Windows 7 can be expected, says Microsoft spokeswoman Irene Nadler.

That's okay for now, though. Unlike with XP and Vista, users of the new system can also get by just fine with the existing product until SP1 arrives.

Now, software to translate as you speak on the phone

Internet giant Google, which has also entered the mobile world with its own phone, the Nexus One, is working on software that will interpret foreign languages as a person speaks. The translation is done almost instantly, and Google hopes to have a basic system ready within a couple of years, reports Chris Gourlay of the Sunday Times.



Google has already developed an automatic system for translating text on computers, which is being polished by scanning millions of multi-lingual websites and documents. So far it covers 52 languages, adding Haitian Creole last week.

Recently Google also launched a feature where a user can search on the search engine by saying the key words instead of typing. Now it is working on combining the two technologies to produce software capable of understanding a caller's voice and translating it into a synthetic equivalent in a foreign language. The phone would analyse "packages" of speech, listening to the speaker until it understands the full meaning of words and phrases, before attempting translation. "We think speech-to-speech translation should be possible and work reasonably well in a few years' time," said Franz Och, Google's Head of Translation Services.

Although automatic text translators are now reasonably effective, voice recognition has proved more challenging. "Everyone has a different voice, accent and pitch," said Och. "But recognition should be effective with mobile phones because by nature they are personal to you. The phone should get a feel for your voice from past voice search queries, for example."

Soon, Gmail to allow status updates like Twitter

The rising popularity of status updates on Twitter and Facebook seems to have inspired Google. Google will soon allow users to share their status with other connections, just as on popular social networking sites. Though the news is not official, the add-on is expected to arrive as soon as this week, according to Electronista.



Yahoo had done a similar revamp of its website to allow status updates. These updates also alert users when their friends have uploaded photos to Flickr.

An unnamed informant says the new Google revisions will also allow users to share their YouTube and Picasa content. Gmail already lets contacts chat in the browser, set away messages and write short messages as their status.

Friday, February 5, 2010

Bluetooth 4.0

Bluetooth 4.0 will be contributing to a new generation of devices in 2010. The hallmark of the standard is low power technology, which promises to run devices on coin-cell batteries for years. The Bluetooth 3.0 standard released in April 2009 focused on providing high data rates; however, it was unable to convince manufacturers to implement it in devices because of the high power consumption during data transfer. Bluetooth 4.0 is considered an improvement over its predecessors due to lower power consumption, lower cost of implementation and co-existence with previous Bluetooth devices.

The all-new Bluetooth technology is aimed at smaller integrated devices, many of which run on button cell batteries, like watches and remote controls. It will enable a whole new range of gadgets such as toys and sports and healthcare devices. Hence you'll be able to convert your home network into an extended personal area network, controlling various devices remotely from anywhere in the house.

Direct Hit!

Applies To: Everyone
Price: N/A
USP: Low power technology to drive devices for years without recharge
Primary Link: www.bluetooth.com
Search Engine Keywords: Bluetooth 4.0, low power technology

It will allow low power devices to access wide area networks (WLAN, 3G and GSM) via mobile phones, giving rise to enhanced connectivity and enabling a new class of applications on low-battery devices. People will be able to download bus schedules from bus stops or product information from a store. Although, being a new technology, some changes will be required to optimize the low power radio protocol, it can also be incorporated into existing Bluetooth technologies, which keeps the cost of implementation low. Another attractive feature of the technology is that it consumes very little power, thanks to low standby currents resulting from long idle times between active radio use. Due to low power and higher range, the technology can also extend network coverage.
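
As a rough, back-of-the-envelope illustration of why "years on a coin cell" is plausible (all figures below are assumptions made for the sketch, not Bluetooth 4.0 specification values):

# Back-of-the-envelope battery-life estimate for a heavily duty-cycled device.
# All figures are illustrative assumptions, not values from the Bluetooth spec.

def average_current_ua(active_ua, sleep_ua, duty_cycle):
    """Average current in microamps for a given active duty cycle (0..1)."""
    return active_ua * duty_cycle + sleep_ua * (1 - duty_cycle)

def battery_life_years(capacity_mah, avg_current_ua):
    """Runtime in years from battery capacity (mAh) and average current (uA)."""
    hours = capacity_mah * 1000 / avg_current_ua
    return hours / (24 * 365)

# Assume 15 mA while the radio is active, 1 uA asleep, active 0.1% of the time,
# powered by a roughly 220 mAh CR2032 coin cell.
avg = average_current_ua(active_ua=15_000, sleep_ua=1.0, duty_cycle=0.001)
print(f"Average current: {avg:.1f} uA")                                     # about 16 uA
print(f"Estimated battery life: {battery_life_years(220, avg):.1f} years")  # about 1.6 years

Lower the duty cycle further - the long idle times mentioned above - and the estimate stretches to several years, which is the whole point of the low energy design.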

The most widely implemented version of Bluetooth is 2.1+EDR. Bluetooth 4.0 provides similar features to Bluetooth 2.1+EDR, such as a maximum data transfer speed of 1 Mbps and a 100-meter range, but at much lower power consumption. Bluetooth 3.0, released last year, in spite of providing data transfer rates comparable to Wi-Fi, was not accepted by device manufacturers due to its high power consumption and bandwidth requirements.

Inside the technology
Some technical details of this technology are:

Dual and single mode operation: Dual mode alternates between high speed and low power modes. This implementation will find its way into the consumer devices where the technology already has a stronghold: mobile phones and PCs, for multiple roles and data storage. Dual mode integrates the functionality of the classic Bluetooth controller, resulting in an architecture that uses classic Bluetooth technology with low energy functionality and hence minimizes the cost increment. Single mode works for ultra low power uses like sensors and displays. The single mode implementation is for highly integrated and compact low power devices; it will feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.

New Bluetooth technology allows small devices to be connected to various services via mobile phones.

Host control: The host controller is programmed to keep the host dormant for longer periods, waking the host only when there is some action to perform. This results in significant power savings, since the host consumes more power than the controller.

Latency: Bluetooth low energy technology supports connection setup and data transfer with low latency (3 ms), allowing an application to transfer data in a few milliseconds.

Enhanced Data Rate: Enhanced Data Rate is a method of extending the capacity and types of Bluetooth packets to increase maximum throughput, provide better support for multiple connections, and lower power consumption, while the remainder of the architecture is unchanged. Very short data packets (8-octet minimum, 27-octet maximum) are transferred at a speed of 1 Mbps, with all connections using advanced sniff-subrating to achieve ultra low duty cycles.

Range: An increased modulation index provides a range of up to 100 meters.

Topology: The low energy piconet supports any number of devices, limited only by the resources of the master, and there is virtually no limit on the number of transports; however, an LE (low energy) device can belong to only one piconet at a time.

Market potential
The target domains for Bluetooth 4.0 will be healthcare, sports, and entertainment. Among the first devices with this technology will be watches that work as CLI displays for mobile phones and as controllers for various entertainment devices. In the healthcare arena, a new range of wristbands for measuring your jogging speed and pulse rate will hit world markets this year.

The low power consumption of Bluetooth 4.0 will attract manufacturers to make low power wireless and lightweight devices. These will be followed by fitness devices that use the handset as a gateway, transferring data to a remote service.

Devices and products that will benefit from low power Bluetooth 4.0 technology giving new definition to mobility

The home and entertainment market is also high on the list of Bluetooth 4.0 target markets. Remote controls with low power Bluetooth technology will allow a single remote control to be used with a wide range of consumer products. Think: you could control your TV and air conditioner with the same remote. Since the market for gaming consoles and controllers is growing rapidly, it has become a hot target for Bluetooth low power technology. The main use of Bluetooth in gaming is to connect multiple controllers to a single console. Medical equipment benefiting from this technology will include devices like heart rate monitors, glucose monitors and blood pressure monitors.

Bluetooth 4.0 vs other low power technologies
There are many other low power technologies in the market which are already being implemented. Zigbee is the best known, along with Z-Wave, which is widely used in automation devices.

We also see a large range of proprietary radios which have already captured a large share of the wireless products market. Although the devices use different spread spectrum technologies, their throughput and range are more or less similar. The difference between these technologies and Bluetooth 4.0 lies on the power management front.

The power consumed by a device depends largely on the speed with which it wakes up, transfers data and returns to sleep. Another differentiating factor is ease of implementation. Bluetooth 4.0 is expected to gain widespread adoption quickly, since the incremental cost and effort of integrating it into mobile phones are quite minimal.

Working From Home: Myths and Truths

Are we rapidly being transformed into a 24 x 7 working society? Globalization of businesses has set in. Employees and business partners in enterprises are pushed and pressured to respond and make decisions faster than ever. The sun never sets in the 'follow the sun' business model; someone's night is someone else's day in a country on the opposite side of the globe. Work takes place anywhere, anytime, any place. Fuel crisis apart, people still seem to travel heavily; just observe how crowded the airports are these days and how many new airlines have come up. When people travel, they still need to work. Thus, work seems to be moving out of traditional offices into homes, hotels, airport lounges, and taxis. The employee is no longer tied to an office location and is, in effect, boundary-less.

Workforce Mobility or Mobile Workforce
It is important to understand what types of jobs may align well with the working from home option. First and foremost, whether working from home is for you depends on the nature of the work you do and your team situation. For example, it has been seen to work wonderfully for certain jobs such as freelance journalism, freelance writing assignments and independent research, and to some extent for musicians, where the seclusion and peaceful, noiseless environment is conducive to composing music. To a large extent, it is suitable for some roles in the IT/software industry where work mostly consists of discussions and task allocations through telephonic meetings, accessing the Internet and web-based applications, using company intranets and filing reports through emails, etc. To some extent 'work from home' may work for certain types of consulting work, but not all types. It is certainly not suitable for jobs involving heavy client interfacing or interactive work (sales forces, teachers, doctors and other medical professionals, etc.) or for manufacturing jobs in factory environments. From a team supervision perspective, it depends on whether your team and your manager are co-located with you. If they are not, and either your team or your manager/supervisor is located in another city or country, then as far as team logistics are concerned it really does not matter whether you work from the office or from home. Communication technologies such as web cameras, Net meeting utilities, video conferencing facilities, chat rooms and document hosting websites let you manage that.

Categories of Mobile Workers

'Total Employee Mobility' is defined as a management concept and business strategy that takes a more holistic and integrated approach to the mobile workforce, with the goal of improving an organization's talent management results, profitability, and agility, and ensuring employee satisfaction and well-being. There are many types of 'mobile workers':

(1) Tethered/Remote Worker - An employee who generally remains at a single point of work, but is remote to the central company systems. This includes home-workers, tele-cottagers, and in some cases, branch workers.

(2) Roaming User - An employee who works in an environment (e.g., warehousing, shop floor) or in multiple areas (e.g., meeting rooms).

(3) Nomad - This category covers employees requiring solutions in hotel rooms and semi-tethered environments where modem use is still prevalent, along with the increasing use of multiple wireless technologies and devices.

(4) Road Warrior - This is the ultimate mobile user: spends little time at the office, but requires regular access to data and collaborative functionality while on the move, in transit, or in hotels. This type includes sales and field forces.

Green aspects
If organizations consider providing the working from home option (based, of course, on organizational policy and defined rules to help judge who should be allowed to use it), many benefits could follow: being able to retain critical talent that might otherwise have left the organization due to personal or family related time constraints, and the 'green' factor - not having to commute means a large reduction in carbon footprint. In the context of supporting the 'go-green' cause, for example, IBM estimates that something in the neighborhood of 58,000 tons of carbon dioxide emissions were not released into the air during a single year of its work-at-home program, thanks to the elimination of daily commutes for 25,000 employees. Another interesting question to explore in the 'green' regard is this: the greater the number of people working from home, the greater the reduction in the number of people working at offices, and as most offices have air-conditioning units, this could eventually bring down the HVAC (heating, ventilation and air-conditioning) requirements. Of course, peak load calculations will still need to cater for the scenario in which the entire headcount works from the office; but if an organization has a permanent mobility program for some of its employees, depending on the type of work they handle, this could be a building design consideration right from the beginning. There is also the accessibility angle: a highly skilled professional may be physically challenged, making commuting very difficult or even impossible, and the work from home option could come in handy to attract and retain such an employee.

Challenges in working from home
Technology is not the only factor that goes into the working from home option. There are other aspects, and many other challenges and concerns that come with it, to be managed. The first that comes to mind is the concern about work productivity. How do we 'measure' the work productivity of white collar workers when they are away from the office and away from the managers/superiors who assign them their tasks? For blue collar workers, measuring work productivity is not an issue because most tasks for this worker class are well defined and discrete, and they produce 'physical' output. There are also home infrastructure and logistics issues if an individual ends up working from home. In Indian metros, with their space scarcity, how many of us have the luxury of setting up an office-like space in our apartments so that we can work without any disturbance even when operating from home? One would worry about disturbing the family. For example, you could be in a situation where you are constantly on calls working with your virtual team members in other geographies. If your house is a small apartment, it is obvious that your long calls could disturb the entire household!

A typical question asked by management is this: 'Who should we allow to take the work from home option?' Clearly this is not an easy question to tackle. Precedents can be set and expectations can build up if the option is exercised indiscriminately. First of all, there has to be a clear and well thought out policy on the mobility and flexibility an organization wants to provide to its employees. The percentage of employees who can take the mobility option should be known in advance, with justifiable business cases, and with the Human Resources department involved in evaluating, on a case by case basis, the circumstances under which an employee wants to opt for working from home.

Those who are working from home: what do they feel about it?

"I work from home as a freelance writer, and have done so for almost 17 years. Every day, I wake up, grab a mug of tea and commute from my bedroom to my home office. I haven't met most of my editors or clients. I file my work by e-mail and we communicate by e-mail unless an issue seems complex, in which case I pick up the phone. Occasionally, I attend initial client meetings, but they are rare. While I'd love to attend a meeting with my client in Belgium, I don't think that will happen any time soon.

In addition to freelance writing, I teach continuing education business writing and copywriting courses for the University of Toronto. Until last year, that involved commuting downtown one night a week. Not any more. All the courses I teach are now online. That means, students and the teacher are not driving or taking transit anywhere. All that being said, I do not have the world's most sophisticated technological set up. I have a phone and a three-year-old computer with a broadband Internet connection. If I can use this technology to successfully work from home - with editors, clients and students across Canada, in the United States and in Europe - then why can't more people in corporations, organizations and government offices do likewise? Why do they have to clog our highways every morning and afternoon to commute to work?"

The above excerpt has been taken from the article "Working from home: Is it for you?" at http://paullima.com/blog/?p=144

Some other useful articles and their links where you can get more information on working from home are:

1] Refer to "Avoid the commute: Work at home" at http://bit.ly/8QeQlt

2] "Home working: does it make sense?" at http://bit.ly/4Fxj2u

3] "Working from home: does it really work?" at http://bit.ly/6Rc8r8

4] "Trying to increase productivity? Send your employees home" at http://blogs.zdnet.com/BTL/?p=10336

Security and privacy
Given that most information assets are now digital in nature and reside on corporate servers, this becomes a big concern. Copying, transferring and replicating documents is easier than ever before when most information exists in soft copy and powerful tools for document handling and emailing are readily available. Will the employee working from home guard the confidentiality of the information he or she is handling? How will sensitive information be handled? Organizations will need to implement adequate security and privacy measures to prevent breaches in the working from home scenario. Providing remote log-in access is a matter that security administrators will need to handle appropriately. Management may feel uncomfortable about not being able to watch an employee working away from the office; however, this concern applies not only to those who work from home but also to the other types of 'mobile workers' mentioned earlier. The security patches, virus scanning and anti-virus software that are pushed to laptops connected to an organization's servers in the office will also need to be run on the remotely connected laptops of those working from home.

Conclusion
Mobility is not a new phenomenon; the workforce has always been mobile: people have been commuting to and from work, people in sales need to complete transactions at customer sites, and people need to meet suppliers and prospects. The only change is that mobility is on the rise and working remotely/working from home has become a viable option. However, it has both ups and downs. Working from home has some appealing benefits, but there are also constraints, issues and challenges; it is a situation of clear trade-offs. Providing a mobility/remote working option to your workforce is not just about procuring and providing the latest technology and devices. There are IT infrastructure challenges to deal with when working from home. The green benefit is worth considering in current times. Other challenges for workforce mobility include legal, statutory and social ones; technology is not the only challenge, contrary to popular belief. Organizations with clear, long term thinking supported by well thought out mobility policies have a fair chance of cracking the mobility challenge and reaping the benefits.

Managing Information Over Years

Today, data centers face the daunting challenge of provisioning colossal storage capacity at an affordable price while meeting ever-increasing performance demands. The biggest threats that data centers and the storage industry face today are power consumption, housing space and environmental concerns. However, many administrators and planners fail to recognize that the value of data to an organization decreases over time, as it loses its relevance, freshness, and "popularity". One question administrators should be asking themselves is: why should data that is decreasing in value remain in expensive front-line storage, subject to the same backup, replication, and recovery policies and procedures as key data? Would it not be useful to have a system or methodology in place for analyzing and tracking data freshness, so that storage space could be freed for fresher and more relevant data, and time- and bandwidth-consuming data protection policies relaxed as data loses its value?

Direct Hit!

Applies To: Database managers
USP: Learn to manage information based on relevance to the organization
Primary Link: http://bit.ly/8wdR71
Search Engine Keyword: information lifecycle management

The noteworthy leap that the storage industry has been forced to take in this regard is storage tiering. Here, the capacity to be provisioned is divided into separate pools of storage space with various cost/performance attributes. At the top resides the Tier 1 pool, which is the most expensive but also the highest performing. The bottom tier is occupied by more cost-effective storage arrays. The next challenge is to devise a sophisticated software layer that intelligently places data into the different tiers according to its value. This concept is variously known as data classification or Information Lifecycle Management (ILM).

What is ILM?
ILM is a concept that encompasses the discovery, classification, analysis, and maintenance of data across the entire period of its useful life. It adds structure and context to data, marking the transition from data to information. ILM is part of the larger concept of Business Continuity Planning, but has become increasingly prominent in the storage arena in recent years thanks to several factors, including advancements in data storage management techniques and the technology that underpins them, and evolution in the storage environment, including:

  • Coexistence of Fibre Channel and iSCSI (IP-Storage) in the data center

  • SAS and SATA storage coexisting in storage systems

  • Storage consolidation practices, for reducing the use of solitary "islands of data" in direct attached storage (DAS)

  • Regulatory requirements for data archiving and recall (SOX, etc.)

Though many vendors offer ILM services or modules as part of their products, ILM is above all a concept or a strategy, rather than a product. However, for a practical explanation of what the concept embodies, we can safely generalize that many implementations of ILM encompass such components as:

  • Database Management

  • Storage System Performance and Monitoring

  • Storage Capacity Planning and Management

  • Business Controls for Data Degradation and EOL

How is this done?
In a tiered storage system, storage is not merely seen as a container of data. Another important dimension of intelligence is appended to every block, transitioning blocks of data into blocks of information.

Data + Intelligence = Information

This intelligence, associated with every block of data, forms very vital metadata, which automatically tracks the access patterns of these blocks. Data is therefore first classified, then moved at the block level from tier to tier based on frequency of access. At the peak of its popularity, data is stored in the fastest, most responsive top-tier storage on hand and subject to the most stringent replication and backup controls. Since the ILM system constantly monitors the data's value in comparison to other data, as it loses value it is migrated down the chain to less expensive, less powerful storage, where it may not be accessed as frequently, or protected as carefully. In the final stage, it is migrated out of the storage system completely. Data of the lowest value is either purged from the system or transferred to other media (e.g., written to tape and delivered to offsite storage), depending on the organization's policy and regulatory requirements for data end-of-life.
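
As a purely illustrative sketch of the kind of policy such a software layer might apply (the tier names, thresholds and sample items below are invented, not taken from any vendor's product), the following Python snippet classifies data items into tiers based on how recently and how often they have been accessed:

# Invented example of an ILM-style tiering policy based on access patterns.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataItem:
    name: str
    last_access: datetime
    accesses_last_30d: int

def assign_tier(item, now):
    """Pick a storage tier from the item's access recency and frequency."""
    age = now - item.last_access
    if age < timedelta(days=7) and item.accesses_last_30d >= 100:
        return "tier 1 (fast, replicated, frequent backups)"
    if age < timedelta(days=90):
        return "tier 2 (capacity-optimised, daily backups)"
    if age < timedelta(days=365):
        return "tier 3 (archive array)"
    return "offline (tape or purge, per retention policy)"

now = datetime(2010, 2, 5)
items = [
    DataItem("orders.db", now - timedelta(days=1), 5000),
    DataItem("q3_report.doc", now - timedelta(days=60), 12),
    DataItem("2006_logs.tar", now - timedelta(days=900), 0),
]
for item in items:
    print(f"{item.name:15s} -> {assign_tier(item, now)}")

A real ILM layer would make this decision per block rather than per file, and would also factor in the regulatory retention rules discussed later in this article.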

Why ILM?
Having examined how an ILM system can be implemented, we should next look more closely at the reasons why more and more organizations are accepting the need for a comprehensive ILM strategy.

Exponential growth of data
With data growth averaging 80% to 100% every year, managing storage effectively has become a challenging task. Storage administrators face limited budgets and are charged not only with expanding capacity by purchasing new hardware wisely to meet projected storage needs, but also with optimizing the use of existing capacity in order to maximize the investment in current storage hardware. Moreover, any changes or additions need to be considered carefully, as the downstream effects of new hardware are often unforeseen and can quickly wipe out any short term cost gains.
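
To put that growth rate in perspective, here is a simple compound-growth calculation (the 10 TB starting point is an arbitrary example, not a figure from the article):

# Compound growth of storage needs at 80-100% per year; the 10 TB start is arbitrary.
def projected_capacity(start_tb, annual_growth, years):
    """Capacity after compound growth, e.g. annual_growth=0.8 for 80%."""
    return start_tb * (1 + annual_growth) ** years

for growth in (0.8, 1.0):
    sizes = [projected_capacity(10, growth, y) for y in range(1, 6)]
    line = ", ".join(f"yr {y}: {s:.0f} TB" for y, s in enumerate(sizes, 1))
    print(f"{int(growth * 100)}% growth -> {line}")

At these rates, 10 TB today becomes roughly 60 to 80 TB within three years and around 190 to 320 TB within five, which is why capacity planning cannot rely on front-line storage alone.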

Data accessibility/freshness
As mentioned at the beginning of this article, data does not have a constant value; rather, that value changes over time, whether due to relevance, security, or popularity. Policies and procedures must therefore be put in place to continuously monitor and shift the location (and therefore the accessibility) of data, so that information that is highest in demand sits in the most accessible location.

Carbon dioxide emissions of traditional storage servers versus tiered storage servers

Cost (TOC) issues
The overall cost of a storage system is measured not just by the initial price paid for the hardware and its commissioning. The total operating cost (TOC) includes maintenance, power and cooling expenses, together with the cost of staffing and training administrators. As storage arrays grow, power usage (for server operation and cooling) is just one factor that has an enormous impact on the TOC of a storage solution. If less expensive solutions are available, administrators should by all means devise a careful plan to incorporate these components, with some restrictions. Where possible, additional storage technology should be adopted that does not require significant investment of time and resources to learn its operation. New solutions that are more power or space efficient should be integrated into the array.

Ability to protect and recover lost data
Because key data has to be protected against loss to ensure business continuity, the term Continuous Data Protection (CDP) has come into being. It describes a scheme for ensuring data survival in the face of disasters such as power/network outages and natural catastrophes, and incorporates techniques such as backups, data snapshots and remote replication. To add to the challenges surrounding data protection, regulatory requirements for the preservation and archiving of several types of corporate data continue to mount.

Data of a particularly sensitive or critical nature must be available for recall within clearly established time limits if circumstances demand it, and kept secure as well. Therefore a successful ILM implementation integrates well with an organization's backup and recovery solutions at several touch points. ILM dictates that as items age they can be taken offline completely and migrated to tape storage, for example, yet some data must still be available for recall even at this point. Since only a percentage of data has to be protected in the same manner, the ILM solution must be flexible enough to manage varying CDP requirements.

Green data centers
As mentioned earlier, one of the primary challenges facing data centers today is power consumption. Thus, while the initial cost of acquiring the storage might be low, the higher cost of power consumption and cooling means that the total operating cost is very high. In addition to the tangible financial burdens this adds, the other, often intangible, concern in such a data center is its environmental impact. Today, global warming and pollution are major hazards that cannot be ignored. There are both regulatory and financial incentives to reducing carbon dioxide emissions, which often result in direct cost savings through increased carbon credits.

Conclusion
Storage Tiering in enterprise-class storage is becoming a highly desirable feature today. It is only a matter of time before the cost, environmental and performance benchmarks of a tiered system become critical parameters on which decisions of storage system procurement will be based.

Tiered storage servers implementing ILM offer a greater cost advantage and better performance. It is important to realize that storage servers with tiered storage and ILM enable data centers to reduce their footprint, electricity costs, and CO2 emissions, creating a greener and more eco-friendly data center.

New security flaw found in iPhone


To help enterprise businesses set up a bunch of iPhones as quickly as possible, the iPhone allows settings configuration files to be installed over the air through Safari. Once a malicious configuration file is installed, hackers can redirect all traffic through a server of their choosing.

It can also be used to wreak havoc on WiFi/e-mail settings and to disable the use of Safari, mail and a handful of other first-party iPhone apps. On top of that, it is possible to set the configuration file so that the user can't remove it - so once it's installed, getting it off the handset would require a full wipe, reports Mobile Crunch.



During installation, the iPhone tells users who the file is from and whether or not it comes from a trusted source, but experts say the anonymous hackers reporting the flaw were able not only to make the configuration file report back as verified, but also to indicate that it came straight from 'Apple Computer' itself.

Graphene likely to replace silicon in computer chips


To make chips work 100 to 1,000 times faster than silicon, scientists have developed a way to put graphene on 4-inch wafers. Graphene is a crystalline form of carbon made up of two-dimensional hexagonal arrays, which makes it ideal for electronic applications.

David Snyder and Randy Cavalero at Penn State said that they came up with a method called silicon sublimation, which removes silicon from silicon carbide wafers and leaves pure graphene. Some scientists had tried similar processes to produce graphene before, but the EOC claims to be the first group to have perfected the process to the point of producing 4-inch wafers, reports Electronista.



Using the smallest wafers in a more conventional method has resulted in 8-inch graphene wafers. The silicon wafers used for today's processors are roughly 11 inches across.

Tuesday, February 2, 2010

India to launch its own satellite for satphone


After depending on foreign satellites for a long time, ISRO plans to launch its own satellite by 2011, which will carry a large S-band transponder to help India provide its own signals for satellite phones.

On the sidelines of the India Semiconductor Association's (ISA's) Vision Summit, former Indian space agency Chairman G Madhavan Nair said, "We are in the process of building a high-beam antenna which will be deployed on board a satellite for providing satellite phone (satphone) services using S-band transponder. We can connect handheld devices when it is launched in a year or so."



The Indian space agency has already designed the antenna that will be mounted on the spacecraft for dedicated satphone services. Once the satellite is launched, India will become one of the major players in satellite phone usage. This might also help bring down the cost of satellite phone services. Nair said, "Presently, some foreign satellites are being used for satphone services in the country. Development work is on to synthesize the antenna. It will be deployed in one of the communication satellites with S-band transponder."

Monday, February 1, 2010

Are e-book readers too old, limited?

After less than a year since its launch, the Amazon Kindle seems to be having a tough time in the market, as users feel the Kindle's usage model is old and limited compared to smartphones, which offer touchscreens, media playback and third-party apps, reports Electronista.

Older readers are more comfortable with the Kindle but feel that news delivery on it is still limited and omits components they like, such as crossword puzzles or all the secondary sections of a physical newspaper.

According to a study by the University of Georgia, a device like Apple's iPad may be better suited to current readers than dedicated devices like the Amazon Kindle.

Beyond usage, all age groups object to the price of the Kindle. At $489, the Kindle DX is too expensive solely for reading news. An e-book reader like the Sony Reader Daily Edition adds a touchscreen and costs less, at $399. The iPad, launched just last week, takes a fresh hardware approach to reading and addresses both the smartphone-level app platform and media features as well as older users' desire for games.

Intel and Micron to launch 25nm flash memory chips

In a move that should give the companies a significant cost advantage over rivals, Intel and Micron are set to launch new 25-nanometer chips today via their IM Flash Technologies joint venture. These are likely to be the first commercial chip products made using advanced 25nm manufacturing technology, reports IDG News Service.

An Intel official said that the new chips are aimed at smartphones, solid-state drives (SSDs), and portable media players such as iPods. "We are currently sampling it with production expected in the second quarter," Intel said via e-mail.



Samsung Electronics, one of the world's largest producers of flash memory, is starting work on 30nm technology this year and plans to use it in most production lines by the end of 2010.

The demand for smaller chips has increased, as developing smaller chip manufacturing technology is crucial to meeting user demand for small devices that can perform many functions, such as smartphones with built-in music players, cameras and computers. Smaller etching technologies also enable companies to increase chip speed and reduce power consumption. Advances in chip manufacturing technology also lower costs over time, a major benefit to consumers.

Analysts have predicted that the manufacturing cost of the new 25nm flash chips will be about $0.50 per gigabyte (GB), compared to $1.75 per gigabyte for mainstream 45nm flash. The market price of flash chips has been hovering around $2.00 per gigabyte, Objective Analysis said, and will likely remain there throughout 2010. Currently, both Intel and Micron are offering chip samples to customers so that they can start planning them into gadget designs.
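
A quick calculation using the per-gigabyte figures quoted above shows what the difference means for a single device (the 32 GB capacity is just an example):

# Cost comparison using the per-GB manufacturing costs quoted above.
COST_25NM_PER_GB = 0.50   # USD, projected 25nm manufacturing cost
COST_45NM_PER_GB = 1.75   # USD, mainstream 45nm manufacturing cost

capacity_gb = 32  # arbitrary example device
old_cost = capacity_gb * COST_45NM_PER_GB
new_cost = capacity_gb * COST_25NM_PER_GB
print(f"{capacity_gb} GB at 45nm: ${old_cost:.2f}")       # $56.00
print(f"{capacity_gb} GB at 25nm: ${new_cost:.2f}")       # $16.00
print(f"Saving per device: ${old_cost - new_cost:.2f}")   # $40.00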

Indian startup to help copy your brain on computers

Now, Swiss scientists and PIT Solution, a little-heard-of IT startup at Technopark in Kerala, will be working on the Blue Brain Project, the world's first comprehensive attempt to reverse-engineer the mammalian brain, reports the Financial Express.

The $3 billion project is expected to be completed by 2018, Henry Markram, Director of the Brain Mind Institute at the Swiss Federal Institute, told the Financial Express. The project is billed as an attempt to build a computerized copy of a brain - starting with a rat's brain and then progressing to a human brain - inside one of the world's most powerful computers. It is an international project, propelled by the Swiss Federal Institute, and involves several countries and ethics monitoring by UN bodies. India is yet to be part of the project.



The immediate purpose is to understand brain function and dysfunction through detailed simulations. "The study of the rodent brain has given us a template to build on. This would help in unraveling the human brain," says Markram. "The whole idea is that mental illness, memory and perception, triggered by neurons and electric signals, could soon be treated with a supercomputer that models all the 1,000,000 million synapses of the brain."

The key finding is that irrespective of gender and race, human brains are basically identical. "We will be able to map the differentiations by nuancing the patterns later. The exciting part is not how different we are but how similar we all are," says Markram.