Lirtex - Technology on the Edge of Time

Microcontroller In Circuit Serial Programming (ICSP) with Microchip PIC and Atmel AVR

In-Circuit Serial Programming (ICSP) is a method of programming a Microchip PIC or Atmel AVR directly while it is connected to the target circuit, as opposed to programming the chip ahead of time and only then soldering it onto the board. There are many benefits to ICSP, but also some important design considerations, which I will try to highlight.

In-circuit programming has many advantages:

  • It shortens the development cycle: it is annoying to take a chip out of the board every time it needs to be reprogrammed, and it is even harder with SMD packages.
  • It allows customer and in-field firmware upgrades.
  • The system can be calibrated during manufacturing or in the field.
  • A unique ID \ serial number can be assigned to each product.

How to prepare your circuit to work with ICSP?

The programmer uses a serial signaling scheme to program the chip in circuit. The signaling is carried over the programming clock (PGC or ICSPCLK) and the programming data (PGD or ICSPDAT) pins. In addition, the MCLR/VPP pin is used as either a high-voltage programming signal or an attention indicator to the device.

Wherever the application allows, use dedicated pins for ICSP; it will save you a lot of trouble. Not sharing a pin between ICSP and I/O, for example, minimizes the preparation work needed to support ICSP.

Often, especially with low-pin-count devices, it is not possible to dedicate the three required pins to ICSP alone, and they must serve a dual function.

In this case:

1. Isolate Vpp from the circuit by using a Schottky diode and an R/C network. NOTE: in some devices, like the PIC12F629, this pin is driven to about 13 volts by the programmer while programming the device. Make sure whatever is connected to the Vpp pin can sustain this voltage level, or isolate it with an appropriate resistor or a Schottky diode.

2. Isolate ICSP_Clock and ICSP_Data from the rest of the circuit. The isolation method is application specific, which unfortunately means there is no ready-made recipe. Often, resistive isolation works fine; recommended resistor values are 1k to 10k.

3. Physically locate the ICSP header as close as possible to the programmed chip, to reduce attenuation.

ICSP Connection Diagram

Common Microchip PIC ICSP layouts

Additional reading and references

  1. Microchip PICkit 2 manual
  2. Microchip In-Circuit Serial Programming™ (ICSP™) Guide
  3. ICSP on Wikipedia
Fast Object Tracking – Robot Computer Vision

I wanted my robot to be able to track objects and follow them. The first thing I wanted to do was give the robot the ability to follow an object with its head camera. The head camera is mounted on a pan-tilt servo system, and is therefore capable of moving left and right, up and down (as seen in the picture below).

My second object tracking goal was to make the robot chase after an object, much like a dog chases a ball thrown by its owner. This kind of tracking is considerably harder: it uses the head-camera tracking from the previous step and combines it with the rest of the robot's sensors to follow the object.
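Conceptually, the chase behaviour is a simple control loop layered on top of the head tracking. The sketch below only illustrates that idea: pan_angle and ball_radius come from the head-tracking code later in this post, while turn(), drive_forward() and stop() are hypothetical stand-ins for the robot's motor interface.

# chase-loop sketch (motor helpers are hypothetical, not the real robot API)
CENTER = 90          # pan servo angle when the head looks straight ahead
DEADBAND = 10        # degrees of pan error we simply ignore

def chase_step(pan_angle, ball_radius):
    error = pan_angle - CENTER
    if abs(error) > DEADBAND:
        turn(error)              # rotate the base towards where the head is looking
    elif ball_radius < 60:       # the ball still looks small, so it is far away
        drive_forward()
    else:
        stop()                   # close enough - stop next to the ball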

How?

To achieve that I'm going to use several basic image processing \ computer vision algorithms, implemented with the OpenCV library. OpenCV, as its name suggests, is an open-source computer-vision library originally developed by Intel. It is cross-platform (I have used it both on a PC and on the ARM-based BeagleBoard). OpenCV is fairly easy to use if you have basic knowledge of image processing.

The first object I wanted to track was a plain-colored orange ball.

Color-based tracking

The first step is to filter only the orange color out of the image. To do that, I converted the image to the HSV color space and then used the cvInRangeS filter twice to filter the orange colors (the hue range wraps around, so two ranges are needed).

In the first picture below you can see the video stream converted to the HSV color space, and in the second picture you can see the result of the red color filtering (all colors except red were filtered out).

(Python code)

# declare necessary objects
hsv_frame = cvCreateImage(size, IPL_DEPTH_8U, 3)
thresholded = cvCreateImage(size, IPL_DEPTH_8U, 1)
thresholded2 = cvCreateImage(size, IPL_DEPTH_8U, 1)
hsv_min = cvScalar(0, 50, 170, 0)
hsv_max = cvScalar(10, 180, 256, 0)
hsv_min2 = cvScalar(170, 50, 170, 0)
hsv_max2 = cvScalar(256, 180, 256, 0)

# convert to HSV for color matching
# as hue wraps around, we need to match it in 2 parts and OR together
cvCvtColor(frame, hsv_frame, CV_BGR2HSV)
cvInRangeS(hsv_frame, hsv_min, hsv_max, thresholded)
cvInRangeS(hsv_frame, hsv_min2, hsv_max2, thresholded2)
cvOr(thresholded, thresholded2, thresholded)


Shape-based tracking

Then I used the Hough transform to detect the circular shape of the ball. Before applying the Hough transform I smoothed the image, because that seems to improve the results.

# pre-smoothing improves Hough detector
cvSmooth(thresholded, thresholded, CV_GAUSSIAN, 9, 9)
circles = cvHoughCircles(thresholded, storage, CV_HOUGH_GRADIENT, 2, thresholded.height/4, 100, 40, 20, 200)

The result

The result is pretty impressive. The code works very well and detects the ball under most circumstances: far from the camera, close to the camera, moving slowly, moving fast, and so on.

You can see the result in this video:

http://www.youtube.com/watch?v=CigGvt3DXIw

Full Python sources, including servo movement

#!/usr/bin/python
# -*- coding: utf-8 -*-
#*****************************************************************************************
#  Name    : Fast object tracking using the OpenCV library
#  Author  : Lior Chen <chen.lior@gmail.com>
#  Notice  : Copyright (c) Jun 2010, Lior Chen, All Rights Reserved
#          :
#  Site    : http://www.lirtex.com
#  WebPage : http://www.lirtex.com/robotics/fast-object-tracking-robot-computer-vision
#          :
#  Version : 1.0
#  Notes   : By default this code will open the first connected camera.
#          : In order to change to another camera, change the argument of
#          : cvCreateCameraCapture(-1) to 0, 1, 2, etc.
#          : Also, the code is currently configured to track RED objects.
#          : This can be changed by changing the hsv_min and hsv_max vectors.
#          :
#  License : This program is free software: you can redistribute it and/or modify
#          : it under the terms of the GNU General Public License as published by
#          : the Free Software Foundation, either version 3 of the License, or
#          : (at your option) any later version.
#          :
#          : This program is distributed in the hope that it will be useful,
#          : but WITHOUT ANY WARRANTY; without even the implied warranty of
#          : MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#          : GNU General Public License for more details.
#          :
#          : You should have received a copy of the GNU General Public License
#          : along with this program.  If not, see <http://www.gnu.org/licenses/>
#*****************************************************************************************

import sys
from threading import Thread

from opencv.cv import *
from opencv.highgui import *
#import serial  # serial link to the servo controller

class RobotVision:

    def InitBallTracking(self):
        print "Initializing ball tracking"
        self.size = cvSize(640, 480)
        self.hsv_frame = cvCreateImage(self.size, IPL_DEPTH_8U, 3)
        self.thresholded = cvCreateImage(self.size, IPL_DEPTH_8U, 1)
        self.thresholded2 = cvCreateImage(self.size, IPL_DEPTH_8U, 1)

        # two HSV ranges, because the red hue wraps around 0/180
        self.hsv_min = cvScalar(0, 50, 170, 0)
        self.hsv_max = cvScalar(10, 180, 256, 0)
        self.hsv_min2 = cvScalar(170, 50, 170, 0)
        self.hsv_max2 = cvScalar(256, 180, 256, 0)

        self.storage = cvCreateMemStorage(0)

        # start capturing from the webcam (-1 = first available camera)
        self.capture = cvCreateCameraCapture(-1)
        if not self.capture:
            print "Could not open webcam"
            sys.exit(1)

        # current pan servo position (degrees)
        self.servoPos = 90

        # CV windows
        cvNamedWindow("Camera", CV_WINDOW_AUTOSIZE)

    def TrackBall(self, i):
        t = Thread(target=self.TrackBallThread, args=(i,))
        t.start()

    def TrackBallThread(self, num_of_balls):
        while 1:
            # get a frame from the webcam
            frame = cvQueryFrame(self.capture)
            if frame is None:
                continue

            # convert to HSV for color matching
            # as hue wraps around, we need to match it in 2 parts and OR together
            cvCvtColor(frame, self.hsv_frame, CV_BGR2HSV)
            cvInRangeS(self.hsv_frame, self.hsv_min, self.hsv_max, self.thresholded)
            cvInRangeS(self.hsv_frame, self.hsv_min2, self.hsv_max2, self.thresholded2)
            cvOr(self.thresholded, self.thresholded2, self.thresholded)

            # pre-smoothing improves the Hough detector
            cvSmooth(self.thresholded, self.thresholded, CV_GAUSSIAN, 9, 9)
            circles = cvHoughCircles(self.thresholded, self.storage, CV_HOUGH_GRADIENT, 2,
                                     self.thresholded.height / 4, 100, 40, 20, 200)

            # find the largest circle
            maxRadius = 0
            x = 0
            y = 0
            found = False
            for i in range(circles.total):
                circle = circles[i]
                if circle[2] > maxRadius:
                    found = True
                    maxRadius = circle[2]
                    x = circle[0]
                    y = circle[1]

            cvShowImage("Camera", frame)

            if found:
                print "ball detected at position:", x, ",", y, " with radius:", maxRadius

                if x > 420:
                    # ball is on the right side of the frame - pan right
                    self.servoPos = min(140, self.servoPos + 5)
                    servo(2, self.servoPos)  # servo() drives the pan servo (defined elsewhere)
                elif x < 220:
                    # ball is on the left side of the frame - pan left
                    self.servoPos = max(40, self.servoPos - 5)
                    servo(2, self.servoPos)
                print "servo position:", self.servoPos
            else:
                print "no ball"

# typical usage from the robot's main program:
#   rv = RobotVision()
#   rv.InitBallTracking()
#   rv.TrackBall(1)
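For readers on current OpenCV releases, where the old SWIG-based opencv.cv bindings used above no longer exist, the same detection pipeline can be sketched with the cv2 API. This is only a rough equivalent, assuming OpenCV 3.x/4.x; the threshold and Hough parameters are carried over from the code above and may need re-tuning.

#!/usr/bin/python
# rough cv2 equivalent of the detection pipeline above (OpenCV 3.x/4.x assumed)
import cv2

cap = cv2.VideoCapture(0)                # first connected camera

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # convert to HSV; the red hue wraps around, so match two ranges and OR them
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask1 = cv2.inRange(hsv, (0, 50, 170), (10, 180, 255))
    mask2 = cv2.inRange(hsv, (170, 50, 170), (180, 180, 255))
    mask = cv2.bitwise_or(mask1, mask2)

    # pre-smoothing improves the Hough detector
    mask = cv2.GaussianBlur(mask, (9, 9), 2)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, 2, mask.shape[0] // 4,
                               param1=100, param2=40, minRadius=20, maxRadius=200)

    if circles is not None:
        # keep the largest detected circle
        c = max(circles[0], key=lambda c: c[2])
        x, y, r = int(c[0]), int(c[1]), int(c[2])
        print("ball detected at x=%d y=%d r=%d" % (x, y, r))
        cv2.circle(frame, (x, y), r, (0, 0, 255), 3)

    cv2.imshow("Camera", frame)
    if cv2.waitKey(10) & 0xFF == 27:     # ESC quits
        break

cap.release()
cv2.destroyAllWindows()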

Sample Sources

This C++ code takes a video stream from an attached video camera, looks for an orange ball inside the stream, and prints the coordinates of the ball.
Three “debug” windows are shown to clarify the process: 1) the raw video capture, 2) the stream after the conversion to HSV, and 3) the stream after conversion to HSV, color filtering, and the Hough transform.

/*****************************************************************************************
 *  Name    : Fast object tracking using the OpenCV library
 *  Author  : Lior Chen <chen.lior@gmail.com>
 *  Notice  : Copyright (c) Jun 2010, Lior Chen, All Rights Reserved
 *
 *  Site    : http://www.lirtex.com
 *  WebPage : http://www.lirtex.com/robotics/fast-object-tracking-robot-computer-vision
 *
 *  Version : 1.0
 *  Notes   : By default this code will open the first connected camera.
 *          : In order to change to another camera, change
 *          : CvCapture* capture = cvCaptureFromCAM( 0 ); to 1, 2, 3, etc.
 *          : Also, the code is currently configured to track RED objects.
 *          : This can be changed by changing the hsv_min and hsv_max vectors.
 *
 *  License : This program is free software: you can redistribute it and/or modify
 *          : it under the terms of the GNU General Public License as published by
 *          : the Free Software Foundation, either version 3 of the License, or
 *          : (at your option) any later version.
 *
 *          : This program is distributed in the hope that it will be useful,
 *          : but WITHOUT ANY WARRANTY; without even the implied warranty of
 *          : MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *          : GNU General Public License for more details.
 *
 *          : You should have received a copy of the GNU General Public License
 *          : along with this program.  If not, see <http://www.gnu.org/licenses/>
 *****************************************************************************************/

#include <opencv/cvaux.h>
#include <opencv/highgui.h>
#include <opencv/cxcore.h>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>

int main(int argc, char* argv[])
{
    // Default capture size - 640x480
    CvSize size = cvSize(640, 480);

    // Open capture device. 0 is /dev/video0, 1 is /dev/video1, etc.
    CvCapture* capture = cvCaptureFromCAM( 0 );
    if( !capture )
    {
        fprintf( stderr, "ERROR: capture is NULL \n" );
        getchar();
        return -1;
    }

    // Create the windows in which the captured images will be presented
    cvNamedWindow( "Camera", CV_WINDOW_AUTOSIZE );
    cvNamedWindow( "HSV", CV_WINDOW_AUTOSIZE );
    cvNamedWindow( "After Color Filtering", CV_WINDOW_AUTOSIZE );

    // Detect a red ball
    CvScalar hsv_min = cvScalar(150, 84, 130, 0);
    CvScalar hsv_max = cvScalar(358, 256, 255, 0);

    IplImage* hsv_frame   = cvCreateImage(size, IPL_DEPTH_8U, 3);
    IplImage* thresholded = cvCreateImage(size, IPL_DEPTH_8U, 1);

    while( 1 )
    {
        // Get one frame
        IplImage* frame = cvQueryFrame( capture );
        if( !frame )
        {
            fprintf( stderr, "ERROR: frame is null...\n" );
            getchar();
            break;
        }

        // Convert the color space to HSV, as it is much easier to filter colors in the HSV color-space.
        cvCvtColor(frame, hsv_frame, CV_BGR2HSV);
        // Filter out colors which are out of range.
        cvInRangeS(hsv_frame, hsv_min, hsv_max, thresholded);

        // Memory for hough circles
        CvMemStorage* storage = cvCreateMemStorage(0);
        // hough detector works better with some smoothing of the image
        cvSmooth( thresholded, thresholded, CV_GAUSSIAN, 9, 9 );
        CvSeq* circles = cvHoughCircles(thresholded, storage, CV_HOUGH_GRADIENT, 2,
                                        thresholded->height/4, 100, 50, 10, 400);

        for (int i = 0; i < circles->total; i++)
        {
            float* p = (float*)cvGetSeqElem( circles, i );
            printf("Ball! x=%f y=%f r=%f\n\r", p[0], p[1], p[2] );
            cvCircle( frame, cvPoint(cvRound(p[0]), cvRound(p[1])),
                      3, CV_RGB(0,255,0), -1, 8, 0 );
            cvCircle( frame, cvPoint(cvRound(p[0]), cvRound(p[1])),
                      cvRound(p[2]), CV_RGB(255,0,0), 3, 8, 0 );
        }

        cvShowImage( "Camera", frame );                       // Original stream with detected ball overlay
        cvShowImage( "HSV", hsv_frame );                      // Original stream in the HSV color space
        cvShowImage( "After Color Filtering", thresholded );  // The stream after color filtering

        cvReleaseMemStorage(&storage);

        // Do not release the frame!

        // If the ESC key is pressed, Key=0x10001B under OpenCV 0.9.7 (linux version),
        // so remove the higher bits using the AND operator
        if( (cvWaitKey(10) & 255) == 27 ) break;
    }

    // Release the capture device housekeeping
    cvReleaseCapture( &capture );
    cvDestroyAllWindows();
    return 0;
}
Linux Robotic Platform – an Intelligent Robot

Intellibot - an Intelligent Robot

I have always wanted to experiment with robotics, and lately I’ve found the time to build an “intelligent”, open-source robotic platform.

The platform runs embedded Debian Linux and includes the following main capabilities:

1. Computer Vision (an imitation of human vision: the robot sees and “understands” what it sees). For this I have made extensive use of the OpenCV project.

2. Speech Synthesis (an imitation of human speech: the ability to speak). For this I have used eSpeak (a minimal usage sketch follows this list).

3. Speech Recognition (the ability to understand vocal commands). For this I have used CMU Sphinx 4 (after modifying some of the files in the project).
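As promised above, here is a minimal sketch of how the robot can speak its current action through eSpeak. It simply shells out to the espeak command-line tool; the phrase and rate below are just examples, not the robot's actual configuration.

# minimal speech-synthesis sketch: shell out to the espeak command-line tool
import subprocess

def say(text):
    # -s sets the speaking rate in words per minute (the value here is just an example)
    subprocess.call(["espeak", "-s", "140", text])

say("Moving forward")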

Just a teaser, before I upload more videos:

http://www.youtube.com/watch?v=okt-2_VxGfc

List of Computer Vision capabilities:

1. Line Following

2. Object Tracking

3. Facial Recognition (uses face API from face.com)

List of Speech Synthesis Capabilities:

1. Speaking the current action (“Moving Forward” etc)

2. Saying “Hi <name>” when it recognizes a face

Current list of Speech Recognition Capabilities:

1. Full movement control (“Move Forward\Backward”, “Turn Left\Right”, “Stop”)

2. Initialization of algorithms: “Follow Line\RedBall”, “Find Face” (a small command-dispatch sketch follows this list)
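Internally, mapping a recognized phrase to a robot action is little more than a dictionary lookup. The sketch below only illustrates that idea; the handler functions are hypothetical placeholders, not the robot's real API.

# command-dispatch sketch: recognized phrase -> handler (all handlers are hypothetical)
COMMANDS = {
    "move forward":    lambda: drive(+1),
    "move backward":   lambda: drive(-1),
    "turn left":       lambda: turn(-30),
    "turn right":      lambda: turn(+30),
    "stop":            lambda: stop(),
    "follow red ball": lambda: start_ball_tracking(),
}

def on_speech_recognized(phrase):
    action = COMMANDS.get(phrase.lower())
    if action:
        action()
    else:
        print "Unknown command:", phrase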

Robot Diagram

 

Automatic Caller Identifier for Maemo (Nokia Linux OS)

Nokia N900

Automatic Caller Identification uses several sites on the internet to identify unknown incoming calls and displays the caller's details while the call is being received.

The project is developed for Maemo, Nokia’s embedded Linux operating system.

For years I have waited to have my own phone that runs Linux: a phone I would be able to use just like any other system, stocked with a full programming tool-chain, SSH access, a decent packaging system, and my favorite set of applications. Finally, that day has come. The Nokia N900 is a fully Linux-based phone which runs Maemo, an SDK and software platform based on my long-loved Debian Linux distribution.

This post will describe the process of developing a Maemo application for the Nokia N900. I will use my Automatic Caller Identification program as an example.

Automatic Caller Identification?

Simply put, an automatic caller identification system is a system that can identify unknown numbers (i.e. phone numbers which are not currently stored in your phone book). Since the N900 has a built-in internet connection (either WiFi or 3G), I wanted to create a program that, once an unidentified call is received, will use several sites on the internet to identify the caller and display their details. Since I live in Israel I first experimented with Israeli numbers, but the code is modular enough to extend it to other countries \ internet databases as well. The database I used was http://441il.com/.
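To give an idea of the overall flow, here is a rough Python sketch: listen for the incoming-call signal on the system D-Bus, look the number up online, and display the result. The D-Bus interface and signal names are written from memory and may differ on your firmware, and lookup_number() is a hypothetical placeholder for the site-specific scraping code.

# rough sketch of the caller-identification flow
# (the D-Bus interface/signal names below are assumptions from memory)
import dbus
import gobject
from dbus.mainloop.glib import DBusGMainLoop

def lookup_number(number):
    # hypothetical placeholder: query an online directory (e.g. 441il.com) for
    # 'number' and scrape the caller's name from the result - site-specific, not shown
    return "Unknown caller"

def on_incoming_call(call_path, number):
    # called for every incoming call; look the number up and display it
    print "Incoming call from", number, "->", lookup_number(number)

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
bus.add_signal_receiver(on_incoming_call,
                        signal_name="Coming",              # assumed N900 incoming-call signal
                        dbus_interface="com.nokia.csd.Call")
gobject.MainLoop().run()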

The Development Process

Development Environment

Maemo SDK

Live example

http://www.youtube.com/watch?v=viuhEDIjAIA

Links and Source-code

https://garage.maemo.org/projects/caller-id/

Building a Custom Debian Kernel for the BeagleBoard

While trying to get my WiFi dongle to work with the BeagleBoard, I noticed that the dongle's module was not compiled into the kernel, so I had to build a new kernel from scratch. Since building a new kernel on the BeagleBoard itself would take a LOT of time, I decided to cross-compile a kernel for the ARM architecture on my AMD64 PC.

The following steps describe how to build a custom Debian kernel for the BeagleBoard by cross-compiling:

Installing required dependencies

Install a proper build environment:

apt-get install git-core kernel-package fakeroot build-essential \
curl libncurses-dev uboot-mkimage

Edit /etc/apt/sources.list and add the Embedded Debian Project sources:

#debian embedded
deb http://www.emdebian.org/debian/ unstable main

Now execute:

apt-get update
apt-get install cpp-4.3-arm-linux-gnu  g++-4.3-arm-linux-gnu gcc-4.3-arm-linux-gnu

You now have a build environment capable of compiling a kernel for the ARM platform. The next step is to acquire and compile the kernel.

Acquiring and Compiling the Kernel

Retrieve the Git checkout:

git clone git://git2.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6.git
cd linux-omap-2.6/
git checkout 58cf2f1 -b v2.6.29-58cf2f1
git archive --format=tar --prefix=v2.6.29-58cf2f1/ v2.6.29-58cf2f1 | gzip > ../v2.6.29-58cf2f1.tar.gz
git checkout master
git branch v2.6.29-58cf2f1 -D
cd ..

Download kernel diffs and kernel config:

wget http://rcn-ee.homeip.net:81/dl/omap/beagle/v2.6.29-58cf2f1-oer34/v2.6.29-58cf2f1-oer34.diff
wget http://rcn-ee.homeip.net:81/dl/omap/beagle/v2.6.29-58cf2f1-oer34/defconfig

Extract Kernel Source

tar -xf v2.6.29-58cf2f1.tar.gz
cd v2.6.29-58cf2f1/

Apply Patch

patch -p1 < ../v2.6.29-58cf2f1-oer34.diff

Copy Defconfig

cp ../defconfig .config

Configure the kernel (requires libncurses5-dev installed)

make menuconfig

Build, Cross-Compiling:

make CROSS_COMPILE=arm-linux-gnu- uImage

A few moments later, you can find your new kernel in the ‘arch/arm/boot/’ directory.

Make modules:

make CROSS_COMPILE=arm-linux-gnu- modules

make CROSS_COMPILE=arm-linux-gnu- modules_install INSTALL_MOD_PATH=arch/arm/boot

(INSTALL_MOD_PATH installs the ARM modules under arch/arm/boot instead of the host's /lib/modules.)

Congratulations, a new kernel has been compiled! Now move uImage and the arch/arm/boot directory to the SD card.

The next step is to boot and test my WiFi dongle again.

You can find some more information about building a Debian kernel here.
