Thursday, September 23, 2010

Week 4 in LEGO lab

Date: 23. September
Duration of activity: 3 hours
Group members participating: Michael Vilhelmsen, Heine Stokholm and Mads Møller Jensen

Goals of the day:



The robot:


Today our robot Thor has put on the following outfit:


It has only two wheels, which it must try to balance on, and to help with this task a light sensor is mounted on the front of Thor.


The program:
The program consists of two main parts: a getBalance() method and a pidControl() method, where getBalance() is responsible for choosing the right offset.
Here we tried four different approaches:

  1. Trying to get the robot to balance and then pressing the START button (this was the initial version).
  2. Trying to get the robot to balance while the START button was held down, so that the offset was set the moment the START button was released. This was the instructor's idea, but with this method we had trouble sensing when the robot was actually in balance.
  3. Waiting 5 seconds after the program had started and then setting the offset to the value the sensor measured at that moment. This way we avoided having to knock the robot out of balance by pressing the START button.
  4. Hardcoding the offset. We took measurements with the robot to find the sensor value at which it was in balance.
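Approach 3 can be sketched in plain Java. Note that SensorReader is our stand-in for the leJOS light sensor, and getBalanceOffset is an illustrative name; the actual getBalance() code is not shown in this post:

```java
// Sketch of approach 3: wait a moment after start-up, then freeze the
// current light reading as the offset. SensorReader stands in for the
// leJOS LightSensor (hypothetical names, for illustration only).
class OffsetCalibration {
    interface SensorReader { int readValue(); }

    static int getBalanceOffset(SensorReader sensor, long waitMs) {
        try {
            Thread.sleep(waitMs);   // time to hold the robot upright by hand
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sensor.readValue();  // this reading becomes the offset
    }
}
```

The appeal of this approach is that no button press disturbs the robot at the moment the offset is captured.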

The pidControl() method, on the other hand, constitutes the controller in our feedback loop. It works by means of a PID system containing three important parameters: p, i and d. P stands for proportional and is the parameter that determines how fast the robot moves towards the offset relative to how far it is from the desired position. I stands for integral and ensures that the robot does not come to rest in a position other than our offset, and d stands for derivative and makes the robot slow down as it approaches the desired position.
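To illustrate how the three terms combine, here is a minimal PID step in plain Java. The gains and names are ours, chosen for illustration, not the values from our actual pidControl() method:

```java
// Sketch of one PID iteration: error is the difference between the current
// light reading and the balance offset. Gains are illustrative placeholders.
public class PidSketch {
    static final double KP = 25.0, KI = 0.5, KD = 8.0;

    static double integral = 0.0;
    static double lastError = 0.0;

    // Returns the motor power for one iteration of the feedback loop.
    static double pidControl(double reading, double offset) {
        double error = reading - offset;        // P: how far from balance
        integral += error;                      // I: accumulated drift
        double derivative = error - lastError;  // D: rate of change
        lastError = error;
        return KP * error + KI * integral + KD * derivative;
    }

    public static void main(String[] args) {
        // Feed a few readings converging on an offset of 40.
        double[] readings = {45, 43, 41, 40};
        for (double r : readings) {
            System.out.println(pidControl(r, 40)); // power shrinks as error shrinks
        }
    }
}
```

Note how the derivative term turns negative as the robot closes in on the offset, which is exactly the braking effect described above.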


Execution:

One of our first futile attempts to get Thor to balance looks as follows:




First we tried to improve the balancing by changing the magic error constant. That did not make much of a difference, though.
Our next thought was that changing the motor speed might help. We first tried lowering the speed to keep the robot from overshooting the set offset, but again without much success, and a higher speed did not help much either.
We also made some attempts at hardcoding the offset the robot should aim for. This caused certain problems, however, since the optimal offset changed frequently because our sensor was not mounted all that securely.

The attempt we had the most success with was when, after a hint from the instructor, we moved our robot to a more uniformly colored table, and with a little extra tweaking of the PID parameters we ended up with the following result:


Thursday, September 16, 2010

Week 3 in LEGO lab

Date: 16. September
Duration of activity: 3 hours
Group members participating: Heine Stokholm and Mads Møller Jensen

Goals of the day:

  • Mount the sound sensor on our 9797 LEGO car, and test the sensor
  • Use a datalogger
  • Reconstruct the car into a sound-controlled car
  • Try to make the car controlled by claps


The sound sensor:
We have mounted the sound sensor (microphone) on the car and uploaded our MicSensorTest.java to it. The program recorded the input from the sensor and wrote the value (between 0 and 100) to the LCD display.
The MicSensorTest program was just a rewrite of the SonicSensorTest we used last week to test the sonic sensor.
http://www.cs.au.dk/~mmjensen/legolab/MicSensorTest.java

No problems or surprises here, as we saw the display changing in accordance with the amplitude of the sounds we made.

Datalogger:
Our second assignment of the day was to use a datalogger on our LEGO car, so we uploaded SoundSampling.java to do so. After running the program we could connect to our car and get access to a log file, which contained a lot of measurements from the microphone. Now we had the possibility to log a specific sound and see the amplitude as a function of time.
Here is our result for a clap:
In the log file it was very easy to see where the clap was, so this is just a small piece of the entire log file. The program wrote to the log file every 5 ms, so even though we only logged a few seconds, the log file was huge.
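The fixed-interval logging can be sketched like this. This is our reconstruction of the idea behind SoundSampling.java, with a Mic interface standing in for the leJOS microphone; the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a fixed-interval sampling loop. Because samples are taken every
// intervalMs milliseconds, sample i in the log corresponds to time
// i * intervalMs, which is what lets us plot amplitude against time.
class SoundLogger {
    interface Mic { int readValue(); }

    static List<Integer> sample(Mic mic, int count, long intervalMs) {
        List<Integer> log = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            log.add(mic.readValue());       // record one amplitude sample
            try {
                Thread.sleep(intervalMs);   // wait until the next sample slot
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return log;
    }
}
```

At 5 ms per sample, even a few seconds of logging yields hundreds of entries, which matches how quickly our log file grew.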

Sound controlled car:
After this we uploaded the SoundCtrCar.java file to our car. The program listened for sound and made the car go forward, turn left, turn right and stop, in that sequence. If the microphone measured a sound level above 90, the car performed the next action in the sequence, looping through the whole thing. The problem with the original program was that the only time it was possible to hit the exit button and make the program quit was at the end of the loop.
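The forward/left/right/stop cycle can be sketched as a small state machine. The names here are ours, not those of SoundCtrCar.java:

```java
// Sketch of the command cycle: each loud sound (level > 90) advances to the
// next action, wrapping around at the end of the list.
class SoundSequence {
    static final String[] ACTIONS = {"forward", "left", "right", "stop"};
    private int index = 0;

    // Returns the action to perform for this loud sound, cycling through.
    String next() {
        String action = ACTIONS[index];
        index = (index + 1) % ACTIONS.length;
        return action;
    }
}
```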
To change this we had two approaches.
First approach: We made the program check the exit button every time it read the input from the microphone. This made it possible to quit the program at any time. This can be seen here:

http://www.cs.au.dk/~mmjensen/legolab/SoundCtrCar.java

Second approach: We made the program implement the ButtonListener interface, which had the same effect as the first approach.

http://www.cs.au.dk/~mmjensen/legolab/SoundCtrCarListener.java

Clap controlled car:
Then came the fun part of the day: we tried to change the SoundCtrCar to only register claps. We used Sivan Toledo's definition of a clap, which is as follows:

     "A clap is a pattern that starts with a low-amplitude sample (say below 50), followed by a very-high amplitude sample (say above 85) within 25 milliseconds, and then returns back to low (below 50) within another 250 milliseconds."

This definition corresponds very well with the graph we created above.

We changed the waitForLoudSound() method from the SoundCtrCar to register claps instead like this:


private static void waitForLoudSound() throws Exception
{
    int clap = 0;
    int soundLevel;
    Thread.sleep(500);
    do
    {
        if (Button.ESCAPE.isPressed()) {
            System.exit(0);
        }
        soundLevel = sound.readValue();
        LCD.drawInt(soundLevel, 4, 10, 0);
        if (clap == 0 && soundLevel < 50) {
            clap = 1;
            Thread.sleep(25);
        } else if (clap == 1 && soundLevel > 85) {
            clap = 2;
            Thread.sleep(250);
        } else if (clap == 2 && soundLevel < 50) {
            clap = 3;
        } else {
            clap = 0;
        }
    } while (clap < 3);
}
 
We basically wait the time stated in the definition and then check whether the next requirement holds. As you can see in the video below it works reasonably well, but there are problems. It would probably be better to loop until the next requirement is true and restart if the time exceeds the limit specified in the definition.
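That suggested improvement could look roughly like this: instead of sleeping for the maximum allowed gap, poll timestamped samples and restart whenever a deadline passes. This is a sketch of the idea with names of our own choosing, not code we ran on the robot:

```java
// State machine over (timeMs, level) samples, using Toledo's thresholds:
// low (< 50), then high (> 85) within 25 ms, then low (< 50) within 250 ms.
class ClapDetector {
    private int state = 0;  // 0: wait for low, 1: wait for high, 2: wait for low again
    private long deadline = Long.MAX_VALUE;

    // Feed one sample; returns true when a full clap pattern has been seen.
    boolean feed(long timeMs, int level) {
        if (state > 0 && timeMs > deadline) {  // too slow: start the pattern over
            state = 0;
            deadline = Long.MAX_VALUE;
        }
        switch (state) {
            case 0:
                if (level < 50) { state = 1; deadline = timeMs + 25; }
                break;
            case 1:
                if (level > 85) { state = 2; deadline = timeMs + 250; }
                break;
            case 2:
                if (level < 50) { state = 0; deadline = Long.MAX_VALUE; return true; }
                break;
        }
        return false;
    }
}
```

Unlike the sleep-based version, this checks every sample inside each window, so a clap is not missed just because it falls between two fixed wake-up points.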

The entire code can be seen here:
http://www.cs.au.dk/~mmjensen/legolab/ClapCtrCar.java

 

Monday, September 13, 2010

Week 2 in LEGO Lab

Date: 9/9-2010
Duration of activity: 3 hours
Group members participating: Michael Vilhelmsen, Mads Møller, Heine Stokholm

  • The goal was to investigate the NXT ultrasonic sensor and use it to build and program a wall follower robot.
  • The plan was: modify our existing robot (equip it with the NXT ultrasonic sensor), upload the SonicSensorTest.java program, modify the sample interval, examine the limitations of the NXT ultrasonic sensor, upload the Tracker.java program, edit the constants in Tracker.java and observe the effect, and finally upload the WallFollower program and study the control algorithm.
  • The results were as follows:






    • We equipped our robot with the NXT ultrasonic sensor seen below:












    • We uploaded the SonicSensorTest program. The sample interval was initially set to 300 msec. We changed this value to 10 msec and observed that the sampling happened faster, but the display did not update every 10 msec, which was hardly surprising. The update probably happened every 100 msec, and realistically, so did the sampling.
    • We aimed the NXT ultrasonic sensor at a wall and observed the distance shown on the display. We moved the sensor further and further away from the wall, trying to get it to output 254 on the display. This never happened; the highest reading we could get was 179. See the conclusion for a discussion of this.
    • We uploaded the Tracker program, placed the robot approx. 50 cm from a wall and ran the program. We observed that the robot approached the wall, then backed up a bit, and then settled into a rhythmic forward-backward motion, but never came to a halt. We tweaked the constants in the program, and the results will be discussed in the conclusion. But let's just say that the only time our robot actually came to a halt was when the battery died...
    • Lastly, we uploaded the WallFollower program and tried to make it follow a line, including taking corners. We spent the rest of the time messing around with this. The results can be seen in the conclusion.
  • The conclusions are as follows;
      • The SonicSensorTest never displayed a value higher than 179. This was most likely because the ultrasound the NXT ultrasonic sensor sends out travels not in flat waves but in a cone. The cone may thus have hit the ground, the ceiling or some other surface and bounced back before the "middle" of the cone hit whatever was in front of it. The speed of sound could also have been a limiting factor, but at 343 m/sec, and with the NXT ultrasonic sensor having a supposed maximum range of 2.55 meters, this seems unlikely. Even with the sound having to travel back to the NXT ultrasonic sensor, the round-trip distance is only 2.55 x 2 = 5.10 meters, which takes 5.10 / 343 ≈ 0.015 seconds to travel.
      • The Tracker program uses Proportional-Derivative control. At first we used the program as it was and saw that the robot did indeed move faster towards the wall when it was further away from it. It never came to a halt, though. We tweaked the constants, and the most notable effect was when we set the minimum speed to 100%. The result was much like figure 5.13 in our book.
      • WallFollower -
    All in all this lab session was pretty easily done, and the results were hardly surprising to anyone.
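The minimum-speed effect mentioned in the conclusions can be illustrated with a small sketch: when the minimum power is 100%, the proportional term is drowned out and the controller degenerates into bang-bang control, which oscillates around the target much like figure 5.13. The gain and names here are illustrative, not the Tracker program's actual code:

```java
// Sketch of a proportional rule whose output magnitude is clamped to a
// minimum power. With minPower = 100 every command is full power, so the
// robot can only overshoot and reverse, never settle.
class ClampedP {
    static final double KP = 2.0;  // illustrative gain

    // distance and target in cm; returns a signed motor power.
    static double power(double distance, double target, double minPower) {
        double error = distance - target;
        double magnitude = Math.max(KP * Math.abs(error), minPower);
        return error >= 0 ? magnitude : -magnitude;  // sign picks the direction
    }
}
```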

    Monday, September 6, 2010

    Week 1 in LEGO Lab

    Date: 2/9-2010
    Duration of activity: 2 hours
    Group members participating: Michael Vilhelmsen, Heine Stokholm

    • The goal for this first lab session was to get familiar with the leJOS Java System, and use it to compile and upload a Java program to the NXT. And to build a robot of course!
    • The plan was simple: build a robot, install leJOS on the NXT, upload a program, mess around with the light sensor, and finally study the memory usage of the NXT.
    • The results were as follows:



      • We downloaded and installed leJOS without any complications.
      • We built a robot according to specifications. The result was a robot much like this one (except ours had claws!)


     

      • We uploaded the Java program found here
      • The program was executed from the NXT and the robot behaved as was expected - it followed a black line on the floor.
      • Then we studied how the light sensor used in constructing the robot works. The Java program had a feature that allowed us to read the light percent values that the light sensor registered. Using this, we made a table of the different values and noticed the difference when the light sensor had its diode turned on and off. The values can be seen below.



      • Finally, we modified the Java program to display the memory usage of the NXT. We then introduced a small change to the way strings were output to the display, and saw how this changed the memory usage.
      • The conclusion is a three-part one:
        • By studying the Java program, it was apparent how the robot was able to follow a black line. It was obvious why it had to be a black line, and it was (as expected) a very simplistic implementation. But we tested the robot several times, and it was quite capable of following the black line.
        • When we studied how the light sensor worked, we saw that it measured the amount of light returned from the surface it was reading from. Thus it makes sense that when it is reading from a black surface, the value is lower, since black absorbs most of the light (and thus reflects less). The values in the table are consistent with this conclusion. We saw that with the diode turned off, all values became lower, as would be expected. The only value that remained the same was the sunlight reading (where we pointed the sensor at a window towards the sunny sky), which makes sense.
        • For the memory usage, we first saw how the NXT filled up its memory and then freed it all during garbage collection. The change we made to the Java program was to make it use memory in an inefficient way, and we saw that the memory did indeed fill up faster.
      All in all this lab session was pretty easily done, and the results were hardly surprising to anyone.
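The kind of measurement and inefficiency described in the conclusion can be sketched with standard Java Runtime calls (which leJOS also provides). The wasteful string loop is our guess at the sort of change the post describes, not the actual modification:

```java
// Sketch: measure used memory before and after a deliberately wasteful loop.
class MemoryDemo {
    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("used before: " + usedMemory());
        // Repeated concatenation builds a fresh String every iteration,
        // so garbage piles up much faster than with a StringBuilder.
        String s = "";
        for (int i = 0; i < 1000; i++) {
            s += i;
        }
        System.out.println("used after: " + usedMemory());
    }
}
```

On the NXT's small heap this kind of allocation pattern fills memory quickly, which is why the display showed usage climbing faster after the change.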