torsdag 30 januari 2014

Praising Intelligence might Undermine Development

Yes, you read that right. Praising intelligence may not evolve the thinking ability, but rather have quite the opposite effect, possibly halting it all together. This startling discovery was recently publicized by Claudia M. Mueller, Carol S. Dweck from Columbia University and Sheri R. Levy, Stanford University.

The study compares the performance of children after they’ve been praised for their intelligence (“you must be smart”) versus their effort (“you must have put a lot of effort into that”). The kids were asked to perform a task, and upon completion they were praised in one of the two manners. After this they where then given a series of harder problems to try to solve.

The kids who where praised for their effort increased in the number of problems they solved. However, those who where told that they were smart showed a frightening decrease.

What’s the difference? Those being told that they’re smart are taught that abilities are fixed. If you’re finding a puzzle hard, then it must be because you’re not very good at that, and you should try something else. Kids taught that results are based on effort see the hard puzzles as something they can learn to do, and then they do so.

In a second set of tests, kids were asked what type of puzzles they like to solve, and then given four options. The first three were pretty similar: “problems that aren’t too hard, so I don’t get many wrong,” “problems that are pretty easy, so I’ll do well,” and “problems that I’m pretty good at, so I can show that I’m smart.” 
However the last option is different: “problems that I’ll learn a lot from, even if I won’t look so smart.”

The first three options are “performance” options, they’re all about looking good, or getting things right. The last option is a “learning” option, and it values knowledge and growth over performance.
The study concluded that children often selected puzzle-options based upon the type of praise they had previously gotten. Children praised for their intelligence choose tasks which are easy and make them look good, but children praised for their effort choose tasks which are hard, and which will teach them more.

Do you think about praise and feedback, and how you direct it? Comment below...

torsdag 23 januari 2014

Marketing Technology Landscape Rewritten (again)

Scott Brinker at the Chief Marketing Technologist Blog recently released an updated version of the widely circulated landscape of Marketing Technology. The 2014 version has a few updates but also plenty of omissions. 

It comes with one main caveat: this graphic is not comprehensive. It is just a sample, albeit a large one, of the many different kinds of software available to marketers today. There are many more companies — indeed, entire categories — that were not included, merely due to the constraints of time and space.

And by the time you read this, it will inevitably be out of date due to new launches, re-launches, expansions, exits, and mergers. The pace of change in this field is breathtaking.

Click the image for a larger pic.

tisdag 21 januari 2014

The Conscious Internet - Part VIII: Our Digital Mary Shelley

A while back a wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regards to the Internet. The article is written with a very philosophical approach to the subject, but handles real life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 8 of 8. You can find the previous chapter here. Happy reading, and please comment below.


The reality that the Internet today is such an intricate part of society that it would be almost impossible to revert back to a time without our digital connections, can hardly be contested. The total amount of information being sent through the ether each day is staggering and ever increasing.

Exactly which virtual straw will break the camel’s back is quite unclear, but what can be agreed upon is that it begins with massive amounts of shared data.

This occurrence is currently referred to as “Big Data”; the massive amount of unstructured, unorganized, and thereby unsearchable (“ungoogleable”) data that is today populating the Internet. Estimates place this type of data at about 90 % of all information, and it is only getting bigger. It consists mostly of social media, but also includes other data-generating interactions such as call-center conversations, TV footage, mobile phone calls, iMessaging, website clicks, etc.

The impacts of Big Data also seem impossible to predict. Game developers today create games which center on social interactions, and the ability to play and share gaming experiences with your friends online. As a result more and more games demand constant connectivity to even boot up a game, something that always results in trouble at launch day. 

When launching Diablo III in 2012, Blizzard Studios tried anticipating the amount of users logging on to play the game for the first time, keep in mind that this was probably the biggest release that year, so the statistical data provided to build servers capable of handling the onslaught was not hard to find. Still they failed. 

The servers were down for days, and “Error 33” (meaning the server is unreachable) was forever carved in Blizzard history. They history repeated itself a year later with the launch of SimCity 5, which again had players disappointedly waiting for a server connection. 

Regardless of our knowledge of the Internet, it seems as though we will never again be fully aware of what goes on within its digital boarders. In theory there could already be a primitive cognitive being in the net, a phantom invisibly surfing the wires in between servers. Our lack of knowledge combined with the speed of which the Internet is growing, would provide the perfect veil for which to hide behind.

In the race between mother board and mother brain the human intellect is currently in the driver’s seat. Our illogical complexity it seems is still guarding the key to cognition, but the grip may be slipping. However, a cognitive digital entity, in spite of SkyNet’s best foreshadowing, does not have to be a threat to society. It could rather turn out to be an invaluable asset for our human development. 

This artificial intelligence would instantly sense our mood if we had a bad day, and turn on an appropriate musical tune or TV show to cheer us up. It would provide moral support when faced with a difficult question, and laugh with us when amused. It would ease our everyday life and relieve both stress and workload. 

Research shows that the points in human history where health and safety have sky-rocked coincide perfectly with spikes in technological evolution. The introduction of the steam engine drastically reduced the work-load placed on the individual employee, resulting in a revolution in increased welfare for the overall human population. 

This innovation infinitely multiplied the power of our muscles, and helped us overcome the limitations of our own bodies. We now stand on the brink of another revolution as we are slowly overcoming the limitations of our intellect, outsourcing intelligence to the computers. 

The creation of a SkyNet is almost a certainty; we will before long have an interconnected, all-knowing entity governing the processes we live and function by, but rather than destroy us, maybe it will help us grow into the next step of human evolution.



Thank you for reading taking time to read my short article, hope you enjoyed it! What do you think our Internet future holds for us? Leave you comment below!

torsdag 16 januari 2014

RoboEarth is up to Speed


In the midst of my series on the possible future of the Internet, an article from BBC falls in my lap, that really brings reality up to speed. As it turns out parts of my article about the future of robots are very much present.

Scientists at the Eindhoven University, the Netherlands, have created the first ever World Wide Web for robots. Sadly not dubbed "SkyNet" these doom bringers decided to call the system "RoboEarth". Playing straight into the fate of the Termintor series they did however decide to set off the robots by run a hospital room. No better way to take over the earth than to start by killing our the weak and discrepant.

I am exaggerating naturally, but I do find the timing comical. The four robots currently working in the mocked-up (I might ad) hospital are designed to serve drinks, dish out pills, or alert to emergencies. At the core of it all is one central system controlling all the robots.

As I discussed previously, a central problem with artificial intelligence is the inability to come up with new solutions to new problems. A computer will be bound to the strict confinements of its programming, resulting in an inability to learn “new” things. This specific problem is something they are now trying to circumvent by having the central computer learn everything. The aim of the system is to create a kind of ever-changing common brain for robots.

"The problem right now is that robots are often developed specifically for one task," said Rene van de Molengraft, the RoboEarth project leader. "Everyday changes that happen all the time in our environment make all the programmed actions unusable. A task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box."

The system is cloud-based, which in turn means that a lot of the memory capacity can be off-loaded from the individual robots to the central core, allowing for much smaller storage space, and faster processing. A single robot would simply download the script for solving the problem at hand, and then delete said script when the task is completed.

Using this approach, robots are becoming increasingly cheaper to manufacture, something that may result in us having servant robots in our homes in as little as 10 years, experts say. All controlled by the central hive-mind.

I hate to be an “I told you” so but - World, I told you so.

______
What do you think? Are robots taking over the world, and if so are they going to let us have a place? Find out my thoughts in the upcoming conclusion of my series “the Conscious Internet”. And as always; please comment below.

tisdag 14 januari 2014

The Conscious Internet - Part VII: Flying over the Cuckoo's Nest





A while back a wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regards to the Internet. The article is written with a very philosophical approach to the subject, but handles real life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 7 of 8. You can find the previous chapter here. Happy reading, and please comment below.

The Internet has grown exponentially over the last couple over years, especially with the introduction of mobile devices. Mobile devices overtook computers as the medium of choice for accessing the Internet worldwide during 2013, and there is today more apparatuses connected to the Internet than there are people on this planet. The number surpassed 10 billion in 2012, outnumbering the current human population of about 7 billion. 

Every minute that passes on the Internet 2 million searches are performed on Google, 600 new homepages are published, 100 000 new tweets are sent, and more than 48 hours of new media is uploaded to YouTube. In fact, every day more than 11 000 years of video is watched on YouTube and that number is growing. 

In 2013, every day 2.9 quintillion bytes of data (1 followed by 18 zeros) are created, with 90% of the world’s data created in the last two years alone. As a society, we’re producing and capturing more data each day than was seen by everyone since the beginning of the earth. To put things in perspective, the entire works of William Shakespeare, as it would be written down in a text document; represent about 5 MB of data. So, you could store about 1 000 copies of Shakespeare on a single DVD. This vast amount data produced every day would create a stack of DVDs reaching from the Earth to moon - twice. 

Obviously we are creating more data than is humanly possible to grasp, and as we are doing so the gap between creating data, and understanding that data, grows just as quickly. Creating content does not require any in depth programming knowledge no more, and the development in the field of interaction design is rather taking us in the opposite direction toward more intuitive and more easily understood interfaces.

This will in turn result in only a selected few possessing the front edge knowledge needed to understand the full entity that is the Internet, and sometimes not even these geniuses will full understand what is happening. In the end of the nineties a new looming menace threatened to strike at humanity; the Y2K bug. 

Computer experts around the world collectively announced that due to a design faux pas in coding the internal motherboard clocks for the world’s computers, there would be a substantial risk all computers would malfunction at midnight of December 31 1999. When writing the code, programmers had only used two digits to store the yearly number instead of four (99 instead of 1999), which would result in all the worlds computers at strike of midnight hitting a full row of zeros for both date and time (00 00 00 – 00:00). 

Coincidentally this is what the motherboard would show if the computer was blank, before it had been programmed to do anything, which is why the experts feared that resetting the clock may result in the same outcome; the computer could interpret this as a “kill switch” and automatically blank all its memory.

The public panic spread like wildfire. Elevators and airplanes where going to plummet to the ground. Ships would run ashore. The electrical grid would be shut down, and with it the pumps controlling the fresh water supply. People started stock-piling everything from water, kindle, and canned goods to gas-masks, guns, and diesel powered generators; anything you would possibly be need to survive the upcoming Armageddon. 

Others believed they could be spared through Y2K-insurances, and paid programming humbugs smaller fortunes to perform laptop-exorcism. But nothing was certain, and so, as the clock crept closer and closer to the fatal stroke of midnight the world held its collective breath. In retrospect, the ignorance displayed may have been amusing, but it proves a how little we really know about our own creations.

... ends in Part VIII: Our Digital Mary Shelley

måndag 13 januari 2014

The Conscious Internet - Part VI: The Bright Side of Life

A while back a wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regards to the Internet. The article is written with a very philosophical approach to the subject, but handles real life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 6 of 8. You can find the previous chapter here. Happy reading, and please comment below.


But the real question remains unanswered still; how much intelligence is to be considered intelligent? Looking at the animal kingdom scientific views begin to differ quickly. Some biologists argue that intelligence can be found with all living organisms, while others only recognize the human intelligence.

One accepted definition of intelligence is the ability to draw conclusions to once environment and adapt accordingly, without any previous knowledge. In other words, distinguishing between educated and being intelligent. The two building blocks for this intelligence has been said to be once cognition and the survival instinct. 

A cognitive being will defend itself if attacked in order to ensure the survival of itself, and in the long run also its species. Ponder for a moment the idea of putting the same type of thinking into a program. A simple line of code would make the program copy itself whenever someone tried to delete it, thus escaping doom. Such a built in defense mechanism would in its perfect form create an eternal program, impossible to wipe out, without making it particularly intelligent. 

The answer to this question may not be to establish links between cognition and computers, but rather to introduce computers to cognition. Several movies toy with the idea of being constantly connected, where we are all living out human life on the net. This raises the question of where the cognitive being really exists. Is its existence tied to the physical body, or is it connected to the realm where it perceives its reality? 

Descartes, who probably would be history´s greatest mind when it comes to questions concerning existence, reasoned that mind always ruled body. He would probably argue that existence would be perceived as where the mind would be present, in other words; living (in) the dream.

... continues in Part VII: Flying over the Cuckoo's Nest

torsdag 2 januari 2014

The Conscious Internet - Part V: Deciphering the Logic





A while back a wrote a short article titled "the Conscious Internet" concerning the development of AI and computer technology in regards to the Internet. The article is written with a very philosophical approach to the subject, but handles real life facts. It has long been my intention to publish it here on the blog, but I just haven't gotten around to doing so. Until now ...

Here's part 5 of 8. You can find the previous chapter here. Happy reading, and please comment below.


Deciphering the Logic
When discussing cognition and artificial intelligence, one does not get far before encountering one of two major theses regulating the subjects; the Turing Test and the Chinese Room Theory. Computer experts around the world usually adhere to one of the two models, spending countless hours trying to prove its principles. 

The Turing Test was formulated in 1950 by the British mathematician Alan M. Turing, as a way of proving or disproving whether a machine was intelligent or not. The test places three different people; a man, a woman, and a interrogator, in separate rooms, with no contact with each other except for a link through which they can send messages - for instance via using a computer. The interrogator then asks the other two questions in order to determine which recipient is who. 

At an unknown point during the test, one of the two being interviewed is replaced by a very clever machine, programmed to answer the interrogator’s questions in the best of human behavior. The question Turing was asking himself was if a machine could be so smart enough that the interrogator would not be able to tell that a switch has been made.

Turing pointed out that the test alone was not created to, nor was it even suitable for, determining whether a program was intelligent or not. He saw the possibilities of a brilliant machine failing the test if its own intelligence differed too much from human intelligence. Or how an ingeniously programmed, but in essence fairly dumb machine, would be able to trick the interrogator into thinking it is still talking to a human counter-part. 

So far no machine has been able to pass the test. The best programs last for about a minute until they finally trip over themselves and repeat a sentence word by word. The program does not only need to be as quick and witty as its flesh and blood counterpart, it also needs to be as dumb and irregular. Some programs have also failed due to the fact that they were too quick, and gave back to accurate information, making the interrogator suspicious. Artificial intelligence also have to balance artificial unpredictability, and artificial stupidity, in order to be truly intelligent.

The Chinese Room Theory formulated in 1980 by John Searle, a professor in philosophy at UC Berkeley (University of California - Berkley), is based on almost the same thinking as the Turing Test. One test subject, who doesn’t know Chinese, is placed alone in a room with only a Chinese instructions manual to keep him company. Messages in Chinese would then be sent into the room, and the test subject is then asked to answer the message using the manual. 

Every message sent into the room would have a corresponding answer, given by the manual, which the test subject then would pass on. Searle argued that regardless of which messages was sent in and out of the room, the person decoding would never gain any knowledge of what was relayed simply because he did not know Chinese. 

The relationship with artificial intelligence is here quite obvious. A computer does not understand the code that is being fed into it, nor does it know how to decipher it, the computer simply knows how to react to the programming. It will therefor never, as a cognitive being would, be able to reserve itself against code it did not like, regardless of how illogical it may be. The computer would in other words jump of the bridge, if only it was programmed to do so.

... continues in Part VI: The Bright Side of Life