On August 11, 2015, Google appointed Sundar Pichai as the company’s new CEO. Pichai, formerly Google’s product chief, became another Indian-born executive to lead a global corporation, after Microsoft’s Satya Nadella.

Sundar Pichai was born in Madras and graduated from IIT Kharagpur with a degree in metallurgical engineering. He joined Google as a product manager in 2004 and went on to manage products such as Google Chrome, Gmail, Google Maps and Google Drive. In April 2011, he became a director of Jive Software, and on August 10, 2015, he was appointed CEO of Google.

Many Indians are, or have been, CEOs of multinational companies such as PepsiCo, Adobe and Citibank.

Here is a list of Indian-born CEOs of global companies:

1. Sundar Pichai
Company: Google
2. Satya Nadella
Company: Microsoft
3. Indra Nooyi
Company: PepsiCo
4. Ajay Banga
Company: MasterCard
5. Rajeev Suri
Company: Nokia

Read More


People using smartphones are more likely to make rational and unemotional decisions compared to computer users when presented with a moral dilemma on their device, according to a new study.

A new study has found that your device of choice may influence how you make moral decisions when using it.

Researchers have discovered that people who opt for a smartphone over a PC are more likely to make rational and unemotional decisions when met with a moral dilemma on their phone – whereas desktop users base their actions on intuition.

The team has suggested that this was a result of the increased time pressures and psychological distance that occur with a smartphone.

Researchers from City, University of London in the UK found that PC users were more likely to favour action based on intuition and following established rules. The research suggests that moral judgements depend on the digital context in which a dilemma is presented and could have significant implications for how we interact with computers.

The study suggests that even under conditions of time pressure, some digital contexts, such as using a smartphone, could trigger utilitarian decision-making.

Read More


A new research study of high-tech medical robots arrived at a curious conclusion this week: Toddlers are geniuses.

Well, they didn’t quite phrase it that way, but that’s the essential takeaway from a series of studies on how machines can help injured people learn to walk again. It turns out that, from a biomechanical point of view, walking is an especially tricky business — and those toddlers are working a lot harder than we thought.

The study, published today in the journal Science Robotics, comes from Harvard’s Wyss Institute for Biologically Inspired Engineering. The research involves robot-assisted gait therapy, which is in itself a pretty amazing slice of technology.

RELATED: Electric Coating Turns Fabrics Into Soft Exoskeletons

The scientists believe this discrepancy is caused by unconscious reactions in the central nervous system, which is primarily concerned with maintaining stability from step to step. Lifting your foot a bit higher in mid-stride doesn’t destabilize you all that much. But a longer stride throws off your center of mass, causing involuntary adjustments and basically freaking out your nervous system.

This prioritization of stability means that other aspects of walking, like the height of the stride or the angle of the toes, may require treatment beyond walking in a clinical exoskeleton.

“With the haptic forces generated by the robot, we can only affect certain aspects of locomotion,” Bonato said.

RELATED: Lab-Grown Neurons Could Help Heal Spinal Injuries and Restore Movement

The upshot, according to the new research, is that engineers will need to take a much broader approach when designing robot systems that help people walk again. The application of haptic technology (machines that push back and provide calibrated resistance) can only solve part of the problem.

Bonato said that researchers in the rehabilitation community are already exploring solutions that combine exoskeletons with interactive display screens or virtual reality environments. Learning to walk, it turns out, is an incredibly complex process, involving multiple cognitive systems working off visual, aural, and tactile cues.

In short, learning to walk is hard. No wonder toddlers are so cranky all the time.

Read More


Faced with tightening work visa rules in key overseas markets, India’s IT industry has to reconfigure its business model by moving more work offshore and hiring locally in those markets, a sector expert said.

V Balakrishnan, former chief financial officer of Infosys, also said hiring in the information technology sector would drop further as automation grows.

Tightening visa rules, particularly in the US and the UK – the industry’s two biggest markets – would have a short-term impact because it would affect the ability of Indian IT companies to send people on short-term work, he said.

“But I think in the long-term, they (Indian IT firms) have to change the business model to do more offshoring because if you take any project today, around 30 per cent work is done outside India, 70 per cent in India. That ratio can be easily brought to 90:10 where you do less work onsite, more offshore,” Balakrishnan told PTI.

Balakrishnan said hiring would come down in the IT industry.

“Earlier, the IT industry used to hire some 500,000 people (annually), that has come down last year and this year it will come down further,” he said.

On entry-level salaries not growing in the sector in recent years, Balakrishnan said it is a demand-supply issue.

“Too much of supply and demand was less, so IT industry never increased salary at the entry level but going forward they will hire less, so I don’t see entry level salaries going up in India,” he said.

Read More


The biggest cyber attack the world has ever seen is still claiming victims and threatens to create even more havoc on Monday when people return to work.

The attack is a virus that locks people out of their computer files until they pay a ransom to the hackers.

Experts say the spread of the virus was stymied by a security researcher in the U.K., but hackers have since issued new versions that cybersecurity organizations are actively trying to counter and stamp out.

The U.K.’s National Cyber Security Centre said Sunday that there have been “no sustained new attacks” of the kind that struck Friday.

But the agency added that some infections may not yet have been detected, and that existing infections can spread within networks.

Europol director Rob Wainwright said earlier on British TV that the attack was “unprecedented” in its reach, with more than 200,000 victims in at least 150 countries.

Hospitals, major companies and government offices were among those that were badly affected. Cybersecurity experts have said the majority of the attacks targeted Russia, Ukraine and Taiwan. But U.K. hospitals, Chinese universities and global firms like FedEx (FDX) also reported they had come under assault.

Things to Know about the Attack:

WannaCry has already caused massive disruption around the globe.

Sixteen National Health Service organizations in the U.K. were hit, and some of those hospitals canceled outpatient appointments and told people to avoid emergency departments if possible.

Barts Health, which runs five hospitals in London, said Sunday it was still experiencing disruption to its computer systems and asked the public to use other NHS services wherever possible.

In China, the internet security company Qihoo360 issued a “red alert” saying that a large number of colleges and students in the country had been affected by the ransomware, which is also referred to as WannaCrypt. State media reported that digital payment systems at some gas stations were offline, forcing customers to pay cash.

Read More


Introduction:

Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.
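
To make the pay-for-what-you-use idea concrete, here is a tiny illustrative calculation in Python. The rates and usage figures are invented for the example and are not any provider’s actual pricing.

    # Hypothetical usage-based bill: pay only for what was actually consumed.
    # All rates and quantities below are made up for illustration.
    vm_hours = 200        # hours a virtual machine actually ran this month
    vm_rate = 0.05        # assumed price per VM-hour
    storage_gb = 50       # gigabytes kept in cloud storage
    storage_rate = 0.02   # assumed price per GB-month

    bill = vm_hours * vm_rate + storage_gb * storage_rate
    print(f"Monthly bill: ${bill:.2f}")   # prints: Monthly bill: $11.00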

Still foggy on how cloud computing works and what it is for? This beginner’s guide is designed to demystify basic cloud computing jargon and concepts and quickly bring you up to speed.

Uses of cloud computing:

You are probably using cloud computing right now, even if you don’t realise it. If you use an online service to send email, edit documents, watch movies or TV, listen to music, play games or store pictures and other files, it is likely that cloud computing is making it all possible behind the scenes. Here are a few of the things you can do with the cloud (a short code sketch follows this list):

    • Create new apps and services
    • Store, back up and recover data
    • Host websites and blogs
    • Stream audio and video
    • Deliver software on demand
    • Analyse data for patterns and make predictions
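
As one concrete illustration of the “store, back up and recover data” item above, here is a minimal Python sketch using the AWS SDK for Python (boto3) against an object store. The bucket, key and file names are hypothetical, and credentials are assumed to be already configured in the environment; any other provider’s storage SDK would look broadly similar.

    # Minimal sketch: back up a local file to a cloud object store and
    # recover it again later. Bucket and key names are hypothetical.
    import boto3

    s3 = boto3.client("s3")

    # Back up: push the local file into the bucket.
    s3.upload_file(Filename="report.csv",
                   Bucket="example-backup-bucket",
                   Key="backups/report.csv")

    # Recover: pull the same object back down to local disk.
    s3.download_file(Bucket="example-backup-bucket",
                     Key="backups/report.csv",
                     Filename="restored-report.csv")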

Top benefits of cloud computing:

Cloud computing is a big shift from the traditional way businesses think about IT resources. Why is cloud computing so popular? Here are 6 common reasons organisations are turning to cloud computing services:

1. Cost:

Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters—the racks of servers, the round-the-clock electricity for power and cooling, the IT experts for managing the infrastructure. It adds up fast.

2. Speed:

Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning.
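
To show what “provisioned in minutes” looks like from code rather than mouse clicks, the hedged sketch below asks a cloud API for one new virtual machine. It assumes boto3 and an AWS account; the image ID and instance type are placeholder examples, not a recommendation.

    # Minimal sketch: self-service provisioning of one small virtual machine.
    # The image ID below is a placeholder, not a real machine image.
    import boto3

    ec2 = boto3.client("ec2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image ID
        InstanceType="t3.micro",           # small, inexpensive instance size
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", response["Instances"][0]["InstanceId"])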

3. Global scale:

The benefits of cloud computing services include the ability to scale elastically. In cloud speak, that means delivering the right amount of IT resources—for example, more or less computing power, storage, bandwidth—right when it’s needed and from the right geographic location.
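
In code, elastic scaling usually comes down to changing a desired-capacity number and letting the platform add or remove machines to match. The sketch below uses AWS Auto Scaling through boto3 as one example; the group name is hypothetical.

    # Minimal sketch of elastic scale: request more (or fewer) instances and
    # let the cloud platform converge on that number.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale out for a traffic spike...
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="example-web-fleet",   # hypothetical group name
        DesiredCapacity=10,
    )

    # ...and scale back in once the spike has passed.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="example-web-fleet",
        DesiredCapacity=2,
    )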

4. Productivity:

On-site datacenters typically require a lot of “racking and stacking”—hardware set up, software patching and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.

5. Performance:

The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.

6. Reliability:

Cloud computing makes data backup, disaster recovery and business continuity easier and less expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s network.
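
One crude way to picture “mirrored at multiple redundant sites” is simply keeping the same backup object in buckets in two different regions, as in the sketch below. Real providers offer built-in cross-region replication; the bucket names, regions and file name here are hypothetical.

    # Minimal sketch: keep a second copy of a backup in another region.
    # Bucket names, regions and the file name are illustrative only.
    import boto3

    primary = boto3.client("s3", region_name="eu-west-1")
    secondary = boto3.client("s3", region_name="us-east-1")

    key = "backups/db-dump.gz"
    primary.upload_file(Filename="db-dump.gz",
                        Bucket="example-backups-eu",
                        Key=key)
    secondary.upload_file(Filename="db-dump.gz",
                          Bucket="example-backups-us",
                          Key=key)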

Read More


Introduction:

Web development is a broad term for the work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing the simplest static single page of plain text to the most complex web-based internet applications, electronic businesses, and social network services. A more comprehensive list of tasks to which web development commonly refers may include web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development. Among web professionals, “web development” usually refers to the main non-design aspects of building web sites: writing markup and coding.

More recently, web development has come to mean the creation of content management systems (CMS). A CMS can be built from scratch, or be proprietary or open source. In broad terms, the CMS acts as middleware between the database and the user, via the browser. A principal benefit of a CMS is that it allows non-technical people to make changes to a web site without needing technical knowledge.
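
To make the “CMS as middleware between the database and the user” idea concrete, here is a deliberately tiny sketch in Python using Flask and SQLite. The framework choice, routes and schema are illustrative assumptions, not how any particular CMS is built.

    # Tiny illustrative "CMS": a web layer that reads and writes page content
    # in a database, so the site can change without anyone editing HTML files.
    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)
    DB = "cms.db"

    def get_db():
        conn = sqlite3.connect(DB)
        conn.execute("CREATE TABLE IF NOT EXISTS pages "
                     "(slug TEXT PRIMARY KEY, body TEXT)")
        return conn

    @app.route("/page/<slug>", methods=["GET"])
    def show_page(slug):
        # The browser asks for a page; the middleware fetches it from the DB.
        row = get_db().execute("SELECT body FROM pages WHERE slug = ?",
                               (slug,)).fetchone()
        return row[0] if row else ("Not found", 404)

    @app.route("/page/<slug>", methods=["POST"])
    def edit_page(slug):
        # A non-technical editor submits new text; the middleware stores it.
        conn = get_db()
        conn.execute("INSERT OR REPLACE INTO pages (slug, body) VALUES (?, ?)",
                     (slug, request.form["body"]))
        conn.commit()
        return "Saved"

    if __name__ == "__main__":
        app.run()
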
Web design encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; interface design; authoring, including standardised code and proprietary software; user experience design; and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[1] The term web design is normally used to describe the design process relating to the front-end (client-side) design of a website, including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and, if their role involves creating markup, they are also expected to be up to date with web accessibility guidelines.

Read More


Introduction:

E-learning theory describes the cognitive science principles of effective multimedia learning using electronic educational technology.
Cognitive research and theory suggest that the selection of appropriate concurrent multimedia modalities may enhance learning, as may the application of several other principles.

 

Empirically established principles:

a. Multimedia principle:

Deeper learning is observed when words and relevant graphics are both presented than when words are presented alone (also called the multimedia effect).[27] Simply put, the three most common elements in multimedia presentations are relevant graphics, audio narration, and explanatory text. Combining any two of these three elements works better than using just one or all three.

b. Modality principle:

Deeper learning occurs when graphics are explained by audio narration instead of onscreen text. Exceptions have been observed when learners are familiar with the content, are not native speakers of the narration language, or when only printed words appear on the screen. Generally speaking, audio narration leads to better learning than the same words presented as text on the screen. This is especially true for walking someone through graphics on the screen, and when the material to be learned is complex or the terminology being used is already understood by the student (otherwise see “pre-training”). One exception to this is when the learner will be using the information as a reference and will need to look back to it again and again.

c. Coherence principle: Avoid unnecessary content (irrelevant video, graphics, music, stories, narration, etc.) in order to minimize the cognitive load that irrelevant and possibly distracting material imposes on memory during learning. Basically, the less learners know about the lesson content, the easier it is for them to get distracted by anything shown that is not directly relevant to the lesson. For learners with greater prior knowledge, however, some motivational imagery may increase their interest and learning effectiveness just a bit.

d. Contiguity principle: Keep related pieces of information together. Deeper learning occurs when relevant text (for example, a label) is placed close to graphics, when spoken words and graphics are presented at the same time, or when feedback is presented next to the answer given by the learner.

e. Segmenting principle: Deeper learning occurs when content is broken into small chunks. Break down long lessons into several shorter lessons. Break down long text passages into multiple shorter ones.

f. Signalling principle: The use of visual, auditory, or temporal cues to draw attention to critical elements of the lesson. Common techniques include arrows, circles, highlighting or bolding text, and pausing or vocal emphasis in narration. Ending lesson segments after critical information has been given may also serve as a signalling cue.

 

Read More