We humans are good at deciphering others’ emotional states from the sounds they make. A baby’s laugh tells us instantly that she is happy; similarly, we have no trouble knowing that a dog’s spirited bark is a sound of joy. Indeed, previous studies have shown that we’re adept at distinguishing the barks of lonely, angry, and happy dogs. Are there, then, similar features that we listen for when we hear a baby’s laugh and a dog’s happy bark, or a man’s angry cough and a dog’s growl?

To find out, a team of researchers devised an online survey to assess how humans perceive the emotional content of human and dog vocalizations. Thirty-nine people were recruited to take the survey. They listened to randomly played nonverbal sounds, such as a woman cooing, a man snorting, a baby giggling, and a dog growling or barking. The volunteers rated each sound on a scale from positive to negative and also rated its emotional intensity.

The researchers’ statistical analysis revealed a striking relationship between how the listeners rated the emotional aspect of each sound and its acoustics. Shorter calls—whether human or dog—were regarded as more emotionally positive than longer calls; and higher pitched samples were rated as more emotionally intense than lower pitched ones for both species, the team reports online today in Biology Letters. By following these same simple rules, they conclude, it may be possible to develop easily recognized artificial emotions in social robots.