Thursday, May 1, 2008

Little pitchers have big ears...

Raggzz, the cairn terrier we had growing up, wasn't necessarily the smartest dog on the planet. She wasn't dumb, but she wasn't above average. Well, she was the cutest darn cairn terrier ever born, but that had nothing to do with her brain power. However, despite a complete lack of training she managed to learn what “eat”, “walk”, and “bath” meant. In fact, we even switched the words around a bit and started spelling eat and bath and saying “promenade” instead of “walk” and she still figured it out. It was basically a kind of Pavlovian or classically conditioned learning – she always heard those words in conjunction with those events so hearing them made her anticipate the event. We learned to be careful in how we used those terms in everyday conversation. Otherwise we risked unleashing the full range of terrier emotions about whatever it was she thought we had said about her.

Yesterday I was driving along, listening to NPR as I usually do. (Yes, I'm a liberal crunchy granola girl, if you haven't already figured that out!) Bug was in the back seat. She'd rather be listening to music, but I can only take so much of Elmo and Ken Lonnquist, no matter how talented they are, especially if I'm driving along a tedious route as I was yesterday. The NPR discussion was about Barack Obama and Reverend Wright and I was only listening with half an ear. Then I heard a little voice from the back seat. “O. Bam. O... Obama! Bah-rack...Barack Obama!!!” She was sounding out the unusual name just like she tries to sound out words. And she was pretty darn proud of herself for sounding like the radio host.

While Bug doesn't currently have an extreme association with the name Barack Obama, like Raggzz did with her key words, it's still pretty amazing that she was able to pick out that name, a name she's heard so much on the radio, out of all the political mumbo jumbo. And it also reminds me that I'd better keep my *&@# mouth shut when I'm driving or my child will pick up more than a liberal education!

Why?

When Cousteau was about a year old, he decided to venture into the open door of our 110-year-old house's cellar. That's not so surprising, but what was amazing is that with all of the stuff down there for him to get into – hoses, gardening equipment, paint supplies, litter box, packing materials, etc. – he chose to find a chunk of rat poison laid down by the house's previous owners (something we were completely unaware of). How did he know to find the most toxic substance there? It must be something hardwired into snotty puppies. $800 and 3 vet visits later he was fine.

Bug has had many painting projects. She's used tempera, water color, and acrylic purchased from the kid craft aisle. So why, oh why, does she decide to taste the acrylic paint used for outdoor terra cotta projects?! The stuff that doesn't say “non toxic” or “safe for children”. No, instead she's sucking on a paintbrush full of the paint whose label says NOTHING about toxicity. She's never had the urge to taste paint before. I blame the Labrador. Thankfully, the child is also fine.

Wednesday, March 12, 2008

Fighting Fears

I have a toddler and a collie – I know a little bit about dealing with fears and phobias that pop up at random, and I think I'm getting pretty good at working through these situations effectively.

Conditioned Emotional Response or CER – an emotional reaction formed through classical conditioning, when a conditioned stimulus is paired with an unconditioned stimulus. We often deal with fearful responses, and CERs are very strong and hard to extinguish.

Poor Beamish must have been trained on a shock collar before I got him. The mere sound of the warning tone before he'd get a burst of citronella from a scent collar or on the electronic fence, or even the warning tone on a cross walk signal, turned him into a frozen, quivering mass. There was no shock associated with the tone in the 3 years he was with me, but the fear was there whenever he heard the sound.

Bug had no particular feelings about my cousin until my cousin watched Bug during Bug's separation anxiety phase. Now Bug always associates my cousin with me leaving and says she doesn't like her, even though they have a lot of fun when they are together and even if I'm not going anywhere.


Counterconditioning – changing the student's association of a conditioned stimulus to an opposite association.

One of Beamish's triggers was the presence of a sheltie. Whenever he'd see a sheltie he would get very stiff and start showing aggressive body language. If the sheltie got close enough, Beamish would lunge at it. We happened to be staying in a place where we saw a couple of shelties frequently. Beamish and I would sit off to the side and every time he noticed a sheltie I would click and give him a treat. By the end of that trip he was doing a pretty good job of looking at the sheltie calmly and turning to me for a treat. We had begun to counter his negative response toward shelties with a more positive, or at least neutral, response.

My aunt happened to watch Bug several times during her separation anxiety phases. She knew that Bug had a strong negative response to my leaving, so she brought ice cream with her whenever she came to watch Bug. Soon Bug was telling me “bye bye” because she knew she wasn't getting “i teem” until momma left.


Desensitization – introducing small doses of the fear-provoking stimulus and gradually working up to exposure to the entire stimulus.

Cousteau did not enjoy our 19 hour drive from Massachusetts to Wisconsin. (Neither did we!) He refused to get into the car for weeks after we moved, probably because he was afraid he'd be stuck in the back of a packed car with us screaming at him to lie down or he'd get tangled in his seat belt during rush hour traffic or some doG awful road construction again. We started throwing treats in the car and letting him get back out. Then we took short trips to fun places. After a few months he was able to get back into the car on his own without complaint and even tolerates road trips well now.

For some reason, Bug was terrified of the vacuum. From the very first time I turned it on she freaked out. We started vacuuming in a room as far away as we could get from her and worked closer one room at a time as she allowed us to. We also introduced a toy vacuum and a smaller, quieter electric sweeper. Over time she could happily play with her toy and remain in the room with the electric sweeper. Then she moved up to using the sweeper herself. Now she can tolerate being on the same floor, a room away from the “big Daddy vacuum”.


Flooding or response prevention – a barrage of the conditioned stimulus without the unconditioned stimulus present. In other words, the presence of the big scary thing in huge amounts.

I didn't realize I was flooding with Lacey, one of my foster dogs, and I'm lucky it worked. She was afraid of big, tall men with deep voices. My dad happens to fit that category. We started off with some counterconditioning, but then I needed to run into my grandmother's house and dogs aren't allowed there. So I tossed Lacey's leash to my dad and went in. Lacey began freaking out, but eventually settled down. I don't think this would have worked if I hadn't started with the counterconditioning, though.

I can't think of a time when I've used this with Bug. It just isn't a very pleasant way to deal with a CER. I've tried it a bit with my fear of snakes by forcing myself to go into the herpetarium at the zoo and look at the snakes surrounding me. Even though the snakes are behind glass, the longer I am there, the more anxious I feel. Maybe if I sat in that room for hours and hours I would become so exhausted by the constant anxiety that I wouldn't be able to shake and hyperventilate, but I'd be just as terrified the next time Bug and my husband wanted to see the snakes.

Schedules of Reinforcement or When and Why Does the Good Stuff Happen

Schedules of Reinforcement...I hate these guys. Just give a dog a treat and be done with it! But it's not that simple and different schedules are more effective for certain things, so here goes my take on them.


Schedule of Reinforcement – “a program or rule that determines how and when a response will be followed by a reward.” The schedule has an effect on how the response is learned and how it is maintained. Use a different schedule for learning than for maintaining.


Continuous Reinforcement Schedule or CRF - there is a reinforcement each time the response is observed. This is used when the student is first learning the skill.

Havana has had a hard time bringing the ball to me from the flyball box. At this point she always gets her tug if she brings me the ball, no matter what else happened on the run.

Bug is in the early stages of potty training. Every time she uses the potty she gets lavishly praised and chooses some kind of reward (mint, tell daddy, call Nana, etc.).


Partial Reinforcement Schedule or PRF – also called an intermittent reinforcement schedule. Certain responses are reinforced, not all. The reinforcement is offered on a ratio or at intervals. Good for maintaining all-or-nothing behaviors.


  • fixed ratio FR – a set ratio between the number of responses and the number of rewards

    I actually don't use this one very much for Bug or my dogs. It doesn't work as well for me as other schedules. Hypothetically:

Cousteau is on an FR-4 schedule (4 responses before his reward). If he gives me four good sits I will reward him after the 4th sit.

A silly game Bug and I could play is one where she'd hit my palm and I'd wait for 3 slaps before I grab her hand. This would be an FR-3.

  • variable ratio VR – the number of responses between each reinforcement changes from one time to the next. Also known as the “slot machine schedule” since this is what makes those one-armed bandits so reinforcing for some people. Good for maintaining all-or-nothing behaviors.

When Havana works on heeling she may be reinforced after four steps one time, nine steps the next time, and five steps the time after that. This would be a VR-6 because on average she has to give six responses before getting reinforced.

I try not to fall into this schedule, but I'm sure I do sometimes – I just can't think of a specific time. What I see with other children, especially in the grocery store, is a child asking, whining, demanding something over and over again. Sometimes it takes 5 requests and the parent gives permission just for some peace. Other times it might take 20 requests, or on a rough day the parent may give in after one or two requests. This would be a VR-9: on average, this imaginary child gets what s/he wants after every ninth request.


  • random ratio RR – there is no correlation between the behavior and the reinforcement. It just happens, like Fate.

      Cousteau walks by the popcorn popper and a piece falls out at his feet.

      Bug sits on the couch and Havana comes over to sniff her face for no apparent reason (this is very reinforcing for Bug, who just wants the animals by her).


  • fixed interval FI – the response is reinforced only after a certain amount of time has passed.

    When working on proofing Cousteau's sit stay, he is only reinforced for holding position for at least 60 seconds. He may have to hold for longer than 60 seconds, but he will not be reinforced before 60 seconds has passed.

Bug LOVES her vitamins. (Or Bite-A-MUNS) She asks for them many times a day, but can only have them in the morning. It does no good for her to ask at lunch time, after her nap, and during dinner because she only gets that vitamin after she says please in the mornings.


  • variable interval VI – reinforcement occurs after a varied amount of time passes.

      Cousteau likes to run agility, but he has to wait quietly for his turn. He may have to wait quietly for three fast dogs to run, for one dog needing a lot of coaching to run, or he may be on the course back to back if we don't want to reset jump heights.

Bug likes to look at the pictures on the LOLCats website. If she asks 10 minutes after we first looked at the pictures, there may not be any new ones. If she asks 20 minutes after we first looked there may be two new ones.
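For the numerically inclined, the “average” behind a variable ratio label like VR-6 can be made concrete with a quick simulation. This little Python sketch is my own illustration, not part of any training curriculum; it assumes each reward's response requirement is drawn uniformly between 1 and 11, which averages out to 6:

```python
import random

def simulate_variable_ratio(mean_ratio, n_rewards, rng):
    """Simulate a VR schedule: each reward requires a random number of
    responses drawn uniformly from 1..(2*mean_ratio - 1), so the counts
    average out to mean_ratio over many rewards."""
    return [rng.randint(1, 2 * mean_ratio - 1) for _ in range(n_rewards)]

rng = random.Random(42)
counts = simulate_variable_ratio(6, 10_000, rng)  # a VR-6 schedule
average = sum(counts) / len(counts)
print(f"average responses per reward: {average:.1f}")  # hovers around 6
```

Any individual reward might come after 1 response or after 11 – that unpredictability is exactly the slot-machine quality that makes the schedule so persistent.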


Differential Reinforcement Schedule – the quickness of the response determines whether reinforcement is given.

Havana is learning impulse control as well as stimulus control. I will walk around a room with a toy and ask her to “sit” or “down” at random. If she slowly goes into position I tell her “uh uh” and walk on. If she gets into position quickly I tell her “get it” and we tug.

Even though Bug is capable of walking up and down stairs on her own, she'd rather have me with her. If I say I'm going upstairs and she tells me to wait, but doesn't show any indication of coming by me, I go upstairs on my own. If instead she puts down her toy and runs over to me we walk upstairs together.


Differential Reinforcement of Incompatible Behaviors or DRI – rewarding responses that cannot be done at the same time as an unwanted behavior. Also called alternative response training or countercommanding.

Havana was the most mature dog in her beginning agility class. We had a problem with loose dogs so I taught her if a loose dog came to her that she should look at me. If she was looking at me she wasn't face to face with a hyper, adolescent dog.

When I have a lot of dishes to do I bring a stool for Bug to stand on at the sink. She gets bubbly water and a few plastic dishes to wash while I do the rest of the dishes. If she's washing dishes with me she's not whining about wanting my attention or doing things to get my attention.



Differential Reinforcement of Other Behaviors or DRO – what I call “hey, at least it's not anything bad.” You pick one target problem behavior in the student and reward anything that isn't that behavior.

My first dog walking client was a nightmare on leash. One day I got smart and as long as she wasn't pulling, I clicked and treated her. She may have been sniffing, peeing on something, looking around, whatever, it didn't matter because she wasn't pulling me.

I do the same thing with Bug. When she's having a tough day and she's been whining and getting into lots of trouble I make sure to tell her how much I like everything that isn't driving me insane. If she's been screaming in the car at the top of her lungs and she starts talking to her giraffe, I'll tell her how much I like hearing her talk with Geti. Then if she starts singing I'll tell her I enjoy that. I don't care what she's doing just so long as it's not screaming in my ear from the backseat.

Thursday, March 6, 2008

Odds and Ends of Learning Theory Vocabulary

Premack's Theory of Reinforcement – something good can reinforce something not as good. If you do a certain thing you have the chance to do something else that you really like. This has the possibility of increasing the likelihood of the desired behavior without having to use a standard food reward, although food can be used.

My insane black Lab, Beamish, had the hardest time learning “stay”. I was very frustrated, both with the lack of progress and with him bowling me over as we would head up stairs. One day I told him to “stay” at the bottom of the stairs and he actually did it while I went up about 4 stairs. Then I released him and he tore up the stairs. I started asking him for longer and longer stays with running up the stairs as a reward. He wound up with a fabulous stay.

This evening I brought out a plastic recorder for Bug to play. She was being pushy and grabby so I told her to sit on a pillow and wait for me to call her over for it. When she finally managed to sit in one place, I told her to come over and get the recorder.


Extinction – when no reinforcement is given for a response, the response is no longer offered. Be aware of extinction bursts – periods of intense responding that often come before the behavior fades.

See http://dogtrainersbaby.blogspot.com/2007_04_22_archive.html for examples of extinction and extinction bursts.


Capturing behavior – waiting until the student does the behavior and reinforcing it right away. (Works well with very active students who offer a lot of behaviors.)

I wanted to put Beamish's jumping up on cue. I stuck a clicker and treats in my pocket and walked into the house. Beamish jumped on me and I reinforced him for it.

My mom realized that Bug hadn't pooped all morning. After lunch she put Bug on the potty and lo and behold, there was poop. She was heavily praised and got extra stories. (And probably candy, but she didn't mention that part.)


Shaping by successive approximations – the student is reinforced for small steps leading up to the finished behavior. (Great for fearful or aggressive dogs – or kids I guess.)

In heeling Havana was first rewarded for sitting at my left and looking at me. Then she was reinforced only if she took a step when I did. Then reinforced for taking two steps, then three, etc. until she was able to walk a straight line across the room in heel position.

Bug started off building towers by lining up blocks. Seeing how impressed we were was reinforcing, so she started putting one block on top of the other and oohs and aahs ensued. Then she figured out the towers make a great crash if they're taller and she's gradually developed her skill to be able to build towers 15-20 blocks high.


Prompting or luring – using physical placement, manipulating the environment, or a treat to encourage movement toward the desired behavior. (A quick way to get the behavior.)

To teach Havana to finish to the right (sit in front of me and then move around to my right, swing behind me and get into heel position on my left) I put a treat in front of her face and let her follow it around behind me until she was in heel position. Then I clicked her and let her have the treat once she was in position.

I'm not proud of this, but sometimes a Mom's gotta do what a Mom's gotta do. Sometimes when Bug is being very difficult about leaving a room – like her Daddy's office when he's trying to have a conference call with work – I take her beloved giraffe and walk out of the room with it. As much as she loves Daddy, she has to have her giraffe at all times so she follows me. As soon as she's out of the office and the door is closed she gets her giraffe.


Back Chaining – breaking a behavior down into its individual parts and training the last part first so that it becomes rewarding in and of itself. (Uses Premack.) Then the second to last part is trained, then the third to last, etc.

We have a flyball tournament this week so I've got those examples in my mind. The very first thing I did when I got Havana was teach her to tug, largely because in flyball I want her to come back to the tug. Then we set her at the box and had her jump from the box to me. From there we taught her how to operate the box and then to go from the start line over the jumps to the box. Each old behavior was a reward for the newly learned behavior.

Potty training is also very much at the forefront of my mind. Bug loves playing in the water in the sink. From there I let her “help” flush the toilet, then put toilet paper in to watch it swirl when she flushes. All of this before she ever even sat on the potty. We're now at the stage where she doesn't get to flush or wash her hands unless she uses the potty (as opposed to just sitting there and asking me to read her stories over and over).