Wednesday 24 April 2013

Robotic Hugs!

I think there are enough comments on the page that I only need to share the link:

http://designtaxi.com/news/357002/In-Japan-A-Robotic-Coat-That-Hugs-You-Like-A-Real-Girlfriend/#.UWbaEj5mrNk.twitter

Still, I can't help having so many questions:
1- Could technology ever replace social interaction and relationships between humans in the physical world?
2- Could technology damage our social skills and abilities, creating a variety of handicaps?
3- Could we have guidelines to ensure that technology enables and enhances our social interactions with minimum side effects?
4- Would the physical world ever diminish and be redefined as a result of technological advances, especially in augmented reality?

I am sure there are more questions like these, and answering them will require philosophers, moralists, lawmakers, psychologists, sociologists and others.

Tuesday 25 December 2012

Robots and weight loss!


IEEE produces a newsletter on robotics research and some of the items are quite interesting. Many will, I think, find this one to be serious fun!

http://spectrum.ieee.org/automaton/robotics/home-robots/autom-robot-that-helps-you-lose-weight

Wednesday 8 July 2009

Is vlogging the next big thing after social networks?

When the likes of MySpace and Facebook came into existence, social networking was a new buzzword and a potential technological and commercial venture. Some even hailed the new technology as the new definition of socialism for the 21st century.
Nowadays, social networking sites are like corner shops springing up everywhere. As they replicate each other, some have decided to specialize to create their own niche market whilst others have been forced into it. For example, as a computer scientist I find LinkedIn to be most useful. However, I doubt a pop artist would; instead, Bebo would provide a good outlet for the artist to reach a potential young audience on a social network whose users' average age is 16, or so it seems.

Bebo is an example of an age-group-oriented social network. Others, like Google's Orkut, Friendster, and Netlog, either lack such orientation or are becoming regionally or activity-oriented social networks. The point is that social networking sites have become part of our social activities, like clubs and social gatherings. One cannot help but wonder where the next explosion of this technology-based socializing will be.

If we compare what has happened in the cyber world with what happened in the physical world, one expects pictures to follow text, which has happened already, and, as technology becomes available, video to follow pictures. That may be where our next stop is. This thought came to me as I was surveying video sharing sites.

YouTube is the first one that comes to mind, with its flexibility, Google's backing, first-mover advantage, and so on. But are there potential competitors? Looking at Blip.tv, the variety of questions and issues to be asked and answered in this field jumped out. First, vlogging will never kill social networking sites or textual blogging, so the best thing is to integrate them. Second, vlogging has elements of music showcasing that appear strongly on MySpace and Last.FM, so an ability to showcase is very important. As a result, YouTube's single-channel model needs to change; Blip.tv's advertising income share strengthens the idea of showcasing, similar to Last.FM; these are just examples.

Vlogging and vodcasting may be the next socially burgeoning technology, as it becomes accessible to users and its benefits become clearer to commercial users in particular. However, it remains unclear whether there will be similar growth in service providers in this area, whether it will remain limited to a handful of providers, or whether it will become an incorporated service within existing social networking sites, with vlogging or vodcasting as just another tab on the menu. One decisive factor here is the availability of technology to vlogging providers, i.e. codecs, streaming, and ontological tagging.

Wednesday 18 February 2009

Maya 2008 changes

Anyone who uses Maya, Unreal, and similar large software packages, whose development is fast-paced, will find that each version differs from the previous one. Moreover, the tutorials that come with the software, let alone the hundreds available online, are often out of date and lag behind. As far as Maya is concerned, here are a few changes we found, in the labs I run, when using Maya 2008 with the main tutorials that are often used.

Whenever the 'Edit Polygons' menu is mentioned, you can read that as the 'Edit Mesh' menu. Similarly, whenever 'Split' is required, you can use 'Detach Components' in the 'Edit Mesh' menu. As for 'Extrude', there is a menu entry for it under 'Edit Mesh'.
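For those who would rather side-step menu reshuffles altogether, the scripted equivalents tend to be more stable across versions than menu locations. Below is a minimal, purely illustrative Python sketch; it assumes Maya 2008 or later with the maya.cmds module, and the object name and values are made up for the example:

    # Minimal illustrative sketch, not from the tutorials: scripted calls
    # tend to survive menu reshuffles better than menu paths do.
    import maya.cmds as cmds

    # Make a cube and select one face to work on.
    cmds.polyCube(name='tutorialCube')
    cmds.select('tutorialCube.f[0]')

    # 'Edit Mesh' > 'Extrude' (formerly under 'Edit Polygons').
    cmds.polyExtrudeFacet(localTranslateZ=0.5)

    # Tip: performing a menu action once and reading the echo in the
    # Script Editor reveals the underlying command, whatever the menu
    # happens to be called in the current version.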

There are obviously more changes in menu names, but those are the ones we have found so far. My advice to my students is always to look at what the tutorial tells them to do and not necessarily at how the tutorial tells them to do it. This may be of help to those working with Maya and other fast-moving software where tutorials do not necessarily keep up with changes in the software.

Wednesday 4 February 2009

From Swarm Sensors to Emotional Modeling

I will be giving a talk at the University of Lincoln on 5th Feb 2009 in which I will combine two projects. They may seem unrelated. Here is the abstract with some related publications.
==============================

The talk will briefly cover two main research areas. The first part of the talk will look at current work being undertaken to use swarm intelligence in optimizing sensor networks. The work considers static and mobile sensors. The mobile sensors case is the more interesting and difficult since it implies a dynamic topology. We show how GA and PSO can be applied to optimize energy consumption in the sensor network and to maintain and re-organize the topology as sensors move.
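To give a flavour of the PSO side, here is a toy sketch in Python; it is not the project's code, and the objective function, field size and coefficients are assumptions made purely for illustration. A particle swarm searches for sensor placements that minimise a stand-in energy-cost function:

    import numpy as np

    # Toy PSO sketch: place N sensors on a 2D field so that the total squared
    # distance from each sensor to a base station (a stand-in for energy cost)
    # is minimised. All constants are illustrative assumptions.
    N_SENSORS, DIM = 5, 2
    N_PARTICLES, N_ITER = 30, 200
    FIELD = 100.0                      # sensors live in a 100x100 area
    BASE = np.array([50.0, 50.0])      # assumed base station position

    def energy_cost(layout):
        sensors = layout.reshape(N_SENSORS, DIM)
        return np.sum((sensors - BASE) ** 2)

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, FIELD, (N_PARTICLES, N_SENSORS * DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([energy_cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5          # typical PSO coefficients (assumed)
    for _ in range(N_ITER):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, FIELD)
        vals = np.array([energy_cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print('best layout:', gbest.reshape(N_SENSORS, DIM))
    print('energy cost:', energy_cost(gbest))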

The mobile sensors could be viewed as a crowd of people (e.g. carrying mobile telephones and participating in social networking). However, human swarms, unlike other swarms, are influenced in their behaviour by many factors such as perception, emotions and social relations. The second part of the talk will focus on emotion modelling. The emotion models reported in the literature have often relied on a threshold representation of emotions. In our work we start from psychological theories of emotion and develop computational models. This part of the talk will give a brief summary of the work done so far, covering the Darwinian/Ekman basic emotions, Millenson's 3D model of emotions and Scherer's wheel of emotions.
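For contrast, here is a toy illustration of the kind of threshold representation the talk argues against: each emotion is just an accumulating scalar that "fires" once it crosses a fixed cut-off. The emotion names, thresholds and decay rate are invented for this example and are not taken from the publications below:

    from dataclasses import dataclass, field

    # Toy threshold model: levels accumulate from stimuli, decay over time,
    # and an emotion is reported once its level crosses a fixed threshold.
    @dataclass
    class ThresholdEmotionModel:
        levels: dict = field(default_factory=lambda: {'fear': 0.0, 'anger': 0.0, 'joy': 0.0})
        thresholds: dict = field(default_factory=lambda: {'fear': 0.6, 'anger': 0.7, 'joy': 0.5})
        decay: float = 0.95

        def stimulate(self, emotion, intensity):
            self.levels[emotion] = min(1.0, self.levels[emotion] + intensity)

        def step(self):
            # Decay all levels, then return the emotions above threshold.
            for e in self.levels:
                self.levels[e] *= self.decay
            return [e for e, v in self.levels.items() if v >= self.thresholds[e]]

    agent = ThresholdEmotionModel()
    agent.stimulate('fear', 0.5)
    agent.stimulate('fear', 0.3)
    print(agent.step())   # ['fear'] once the accumulated level exceeds 0.6

Such a representation is easy to implement but flattens the structure that psychological theories such as Millenson's or Scherer's describe, which is the motivation for starting from those theories instead.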



Some related publications
1. Blewitt, W.F., Ayesh, A.: Modeling the Emotional State of an Agent through Fuzzy Logic with Reference to the Geneva Emotion Wheel. In: Bertelle, C., Ayesh, A. (eds.): European Simulation and Modelling (ESM'2008) Conference. EUROSIS, Le Havre, France (2008) 279-283
2. Blewitt, W., Ayesh, A., John, R.I., Coupland, S.: A Millenson-based Approach to Emotion Modelling. In: Conference on Human System Interactions (2008) 491-496
3. Al-Obaidy, M., Ayesh, A.: Optimizing Autonomous Mobile Sensors Network using PSO Algorithms. In: International Conference on Computer Engineering and Systems (ICCES'08). IEEE, Cairo, Egypt (2008)
4. Al-Obaidy, M., Ayesh, A.: Energy Efficient PSO-based Algorithm for Optimizing Autonomous Wireless Sensor Network. In: Bertelle, C., Ayesh, A. (eds.): European Simulation and Modelling (ESM'2008) Conference. EUROSIS, Le Havre, France (2008) 201-206
5. Al-Hudhud, G., Ayesh, A.: Real Time Movement Coordination Technique Based on Flocking Behaviour for Multiple Mobile Robots System. In: Swarm Intelligence Algorithms and Applications Symposium (SIAAS'08), AISB 2008 Convention, Vol. 11, Aberdeen, Scotland (2008) 31-37
6. Ayesh, A., Stokes, J., Edwards, R.: Fuzzy Individual Model (FIM) for Realistic Crowd Simulation: Preliminary Results. In: IEEE International Fuzzy Systems Conference (FUZZ-IEEE 2007). IEEE, London (2007) 1-5
7. Ayesh, A.: Emotionally Motivated Reinforcement Learning Based Controller. In: IEEE SMC 2004, Vol. 1, The Hague, The Netherlands (2004) 874-878
8. Ayesh, A.: Perception and Emotion Based Reasoning: A Connectionist Approach. Informatica 27 (2003) 119-126

Monday 8 December 2008

Does the Turing Test really test intelligence?

The Turing Test has a solid standing in the Artificial Intelligence research community as the ultimate test for intelligent machines. That solid standing may be the result of historical reasons: Alan Turing's visionary paper and predictions put the Turing Test at the heart of any discussion on machine intelligence. Perhaps it is also because no machine can pass the test without cheating; a fully intelligent machine cannot pass the test, but a well-programmed machine, within a time limit, can fool a human examiner and pass the test's mechanism if not its spirit. There may be other reasons, but there are few, it seems, who would question the test itself. The declared aim of the Turing Test is to test intelligence and to provide a benchmark by which we can tell that a machine is intelligent. But does it do that?

Let us examine the Turing Test closely. The test requires a human examiner to have a conversation with two unseen entities. One of these entities is a human whilst the other is the machine to be tested. There is an agreed time limit of five minutes, though that is often argued against. The key is that the human examiner should not be able to tell, through the conversation, which is the machine and which is the human. Now, one conversation topic with which we can trap a pretending machine is the weather.

Assume we ask how the weather is outside and we get one of the following answers:

* It is 21 degrees with northerly wind at speed of 5 knots
* It is lovely weather today [don't you like it sunny?] (the actual weather outside is heavy rain)
* I do not like the weather in England, how do you cope with it?

Now, which one do we think is a human's answer and which one is a machine's? The first answer gives the impression of a machine with good weather sensors, but could it not be a human with a weather station who is mentally lazy and reads it out as it is? The last one could be a human who is a foreigner to England, but could it not be a machine that simply has preset answers to divert the topic in directions in which it can converse? The answer in the middle is the interesting one. The first part of the answer, which could be a genuine answer by an intelligent being, be it a human or a machine, gives the impression of a machine. However, when the optional part is added, which could again be a preset answer for a machine, it gives a sense of cynicism that one is likely to associate with a human rather than with a pattern-matching machine. These cases show the flaws in the Turing Test argument and return us to the question: what does it test?
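To make "preset answers that divert the topic" concrete, here is a toy, ELIZA-style responder; the patterns and canned phrases are invented for this illustration and are not taken from any real system:

    import random
    import re

    # A handful of preset patterns that steer the conversation towards
    # topics the 'machine' can talk about. Purely illustrative.
    RULES = [
        (r'\bweather\b', ['I do not like the weather in England, how do you cope with it?',
                          "It is lovely weather today, don't you like it sunny?"]),
        (r'\bfootball\b', ['I only follow the league tables, who do you support?']),
        (r'.*',           ['That is interesting, tell me more.',
                           'Why do you say that?']),
    ]

    def reply(utterance):
        for pattern, answers in RULES:
            if re.search(pattern, utterance, re.IGNORECASE):
                return random.choice(answers)

    print(reply('How is the weather outside?'))

With nothing more than pattern matching, such a program can produce the second and third answers above, which is precisely why those answers alone cannot settle the question.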

For an ultimately intelligent machine to pass the test, the machine has to be able to pretend to be human. This requires that the machine is conscious of itself, that it is a machine; that it is conscious of the fact that the test requires it to come across as human; that it is conscious of the time and visibility limitations; and finally that it is conscious of what makes a human come across as human, i.e. all the non-intelligent human quirkiness. After all, we would be much quicker to accept a robot as intelligent if it could hold a conversation with a good laugh about football!

In my opinion, the Turing Test does not test intelligence, or at least not solely. It tests consciousness, self-awareness, and the ability to lie. The last is the most important, because the ability to lie is distinctively a human characteristic, associated with our ability to create from imagination.

Our complex cognition makes it difficult for us to distinguish between awareness, consciousness, thinking, intelligence, and the recognition of cognitive processes. The last of these is a good example of the complexity and interweaving of our abilities. When we remember an experience, we recognize that we remembered only after the memory has been recalled; but that recognition in itself makes us aware of the process of memorizing; this often leads us to analyze the memory, the memorization process and the reasons why it was triggered; in other words, we become conscious of our existence in time, of the existence of the memories associated with an experience and of the stimulus that triggered these memories. This leaves us wondering where intelligence is in all of this, and how we can quantify it for measurement.

The thoughts provoked here are not completely new. Similar notions of wonderment have been expressed over the Turing Test, and some attempts are being made to find a quantifiable test of intelligence. The advances in cognitive systems make the need for such a test, or better still metrics, all the greater and more urgent. Many of these alternatives, however, fail because they focus on one element of intelligence or cognition, often learning and rational deduction. In most cases, intelligence is the result of the integration of abilities which, simple as they may be individually, together demonstrate the various facets of cognition and intelligence. For example, survival is an important ability but not necessarily rational; social relations are an important element of thinking but may not lead to rational decisions, e.g. parents staying with their children in a burning building.

Integrative (artificial) intelligence would require quantifiable metrics measuring the different factors in ratios proportional to their impact on behaviour. For example, learning can be a form of categorization, but categorization is in itself a form of thinking and decision making, though it may lead to stereotype-based perception. Equally, categorization can be viewed as a form of memory organization that enables associative memorization. Thus, learning, thinking, memory and perception are all necessary in defining intelligence. In addition, embodiment is just as important. Studies in animal intelligence have given us, and could give us, more insight into the separation between intelligence and the other aspects of mind, and indeed of being human.

These are some thoughts on what is needed to build metrics for testing intelligent systems that are more coherent, less confused, and measurable, so as to truly test intelligence; but this is by no means an attempt to set such metrics, for doing so requires a comprehensive discussion between psychologists, sociologists, AI researchers, neurologists and philosophers to extract the components of intelligence from the mesh of the human mind and to identify their weights in defining an intelligent being (be it a machine!).

A good reading list on the Turing Test and associated topics can be found at: http://www.aisb.org.uk/publicunderstanding/turing_test.shtml

Monday 1 December 2008

Cognition, Serious Gaming, and Behaviourism: The Emotional Dimension

Speaker: Dr. Aladdin Ayesh, De Montfort University
Place: IOCT, DMU
Date and Time: 2pm, 1st December 2008

Abstract:
Video games have advanced greatly in the last few years. Their advance and great popularity can be attributed to a number of reasons. Some of these reasons are very obvious, such as the advances in hardware, both in increased capability and reduced cost. Another important reason is graphics and visualisation. But there are other reasons that are less noticeable yet just as important, if not more so. The application of cognitive studies in developing games can make a huge difference and allow a graphically inferior platform such as the Wii to bypass far superior platforms such as the PlayStation in terms of overall experience and user attraction.
Applying cognitive studies to develop interfaces is not particularly new. However, games give it a new dimension because of a new set of user requirements and expectations. Thus, the cognition we are talking about here relates strongly to the behaviour of the platform and the games played. It also relates to the means of interaction and to the user's presence within the game, which creates an attachment. In this case, emotions play a big role in developing platforms, game scenarios, game avatars and so on.
Emotion modelling has attracted more researchers over the last two years. There are still only a few formal models in computing, whilst there is a great literature in psychology and sociology. In this talk we will look at the current developments in emotion modelling, emotion-based inference, and emotion expression and classification. Particular attention will be paid to behaviourist theories of emotions, which are often used in developing the computational models. Serious gaming, which is the use of game technology in developing serious applications such as simulators and training suites (e.g. some companies set up a presence and deliver training and advertising services through Second Life), will be used as a context to show the importance of computational models of emotions in an era of parallel-living avatars and domestic robots.
The talk will draw on existing projects. One particular project relates to crowd management, especially during disasters and emergencies (e.g. war zones). There are soldier/police avatars, who are clearly identified, but then there are men, women and children within the crowd. Within that crowd there are also troublemakers. Each of these characters has different levels of emotions, derived from their motivations, their perception of the situation and their social relations to others within the crowd. A player or a trainee can then become part of this virtual world as an avatar. It will be shown how, through using emotion modelling and social relation rules, the avatars can exhibit panic and curiosity as an emergent property, similar to what we may observe in the real world.
The talk will conclude with the launch of the www.computational-emotions.org web site, which aims to provide a point of reference for researchers in this area and the basis of a new book by the speaker.


If you would like to book a place at this event, please contact Lisa McNicoll on lmcnicoll@dmu.ac.uk