This is such a hard thing to define, like I said in the other thread, but I'll try to be more elaborate here.
Probably the first thing about humans and animals is the urge for self-preservation. It's not even a thought, it's an impulse. Can this be given to robots, and furthermore, should it? If a robot goes haywire for whatever reason, it's fairly important that it can be disabled, since a robot is automatically going to be stronger and faster and have instant access to more information than a human does. It could potentially prove too dangerous, but on the other hand, that's not so different from humanity. Likely robots would be programmed similarly to how they are in Isaac Asimov's books: unable to do harm to a human. But what if someone were to circumvent that and use robots for high-profile assassinations? No fingerprints, no trace, and the robot would be disabled immediately afterwards. Kind of a creepy thought. Not sure how I got to that from self-preservation... But in such a scenario, the robot would need humanlike emotions to turn it against murder, or petty crime, or whatnot.
What would a jail for robots be like? Would a junkyard be the equivalent of the death penalty? Would robots be substituted in as cheap labor? That seems the obvious role in society for them, but it also echoes the scare that came with computerization: people afraid of losing their jobs to machines.
Lots of tangents there, I'll try to get back on track...
Can a robot learn from experience? Sure, more data can be input, but would it have to be told that fire is hot, or could it learn that by sticking a finger into a candle flame? And further, could it interpret that as harmful? Learning from our experiences is what makes us human. That's definitely a barrier that would have to be overcome in artificial intelligence.
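Just to make that idea a bit more concrete (and this is only a toy sketch, not any real robotics code; every name in it is made up for illustration), "learning from experience" in software terms could be as simple as the robot nudging an internal value after a painful sensor reading, instead of having the rule "fire is hot" written in from the start:

```python
# Hypothetical sketch: a robot learning that flames hurt from experience
# rather than from a pre-programmed rule. No real robotics API is used here.

# How much the robot has learned to "dislike" each action, starting at zero.
aversion = {"touch_flame": 0.0, "touch_water": 0.0}

def pain_signal(action):
    """Stand-in for a sensor reading: touching the flame hurts."""
    return 1.0 if action == "touch_flame" else 0.0

def experience(action, learning_rate=0.5):
    """Update the robot's aversion toward an action based on what it felt."""
    aversion[action] += learning_rate * (pain_signal(action) - aversion[action])

def willing_to_try(action, threshold=0.5):
    """The robot avoids actions it has learned to associate with pain."""
    return aversion[action] < threshold

# First time: no prior experience, so it tries the flame and gets burned.
if willing_to_try("touch_flame"):
    experience("touch_flame")

# Second time: the learned aversion now stops it.
print(willing_to_try("touch_flame"))  # False -- it "learned" that fire is hot
```

Obviously a real robot would need far more than a lookup table, but the point is that the association comes from the burn itself, not from anything a programmer typed in ahead of time.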
Another is creativity. Could a robot be programmed to create masterpieces of art, of writing, of scientific invention? The ability to see things in new ways and to be creative is such a huge part of being human that without it, robots will never be more than 'artificial'.
Then what about the ability to fail? A robot is typically designed to be more efficient than a human, but failing, making mistakes, and growing from them is, again, so classically human. Maybe it's not always a good thing that we fail, but with robots that cannot fail, there's always going to be a sort of superiority/inferiority dynamic, though whether robots will be seen as the superior or the inferior I cannot really say.
Well, that's all I can think of right now; if I come up with more, as I surely will as soon as I hit post, I'll be sure to add it.