
Commit 610ec4b

Brandyn A. White authored and qdot committed
Cleaned up the demos to reflect decisions made from the forum/irc discussions.

1. Abstracted the frame conversion code to frame_convert.py. This will prevent the massive changes that we have been seeing, as all of the duplicative code is in there now. This makes optimization and normalization experiments cleaner to test out.
2. Removed demo_ipython and demo_kill_async as they are mostly duplicates of the other demos.
3. Made the "multi" demo default to using all kinects at once instead of one at a time.
4. Changed the default normalization to make better use of the 8 bit range.

Signed-off-by: Brandyn A. White <[email protected]>
1 parent 5fff205 commit 610ec4b
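Change 4 refers to the depth handling now centralized in frame_convert.py at the bottom of this diff. As a quick reference, the mapping pretty_depth() applies boils down to the following sketch, assuming the 2-byte-per-pixel NumPy depth array the wrapper returns (the helper name here is illustrative; the authoritative code is in frame_convert.py below):

    import numpy as np

    def depth_to_8bit(depth):
        """Map an 11-bit Kinect depth frame into the 8-bit display range.

        Mirrors frame_convert.pretty_depth(): clip to 10 bits, then drop
        the two low bits so the surviving values span 0-255.
        """
        depth = np.clip(depth, 0, 2 ** 10 - 1)  # cap the farthest readings at 1023
        depth >>= 2                             # 1023 >> 2 == 255, i.e. 10 bits -> 8 bits
        return depth.astype(np.uint8)

Truncating this way keeps local depth differences visible on screen, which is the trade-off the old README FAQ entry (removed below) described.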

File tree

10 files changed: +136 additions, -88 deletions


wrappers/python/README

Lines changed: 4 additions & 4 deletions
@@ -12,17 +12,17 @@ Install
 - Global Install: sudo python setup.py install
 - Local Directory Install: python setup.py build_ext --inplace

-Why do the demos truncate the depth?
-The depth is 11 bits, if you want to display it as an 8 bit gray image you need to lose information somewhere. The truncation allows you to differentiate between local depth differences but doesn't give you the absolute depth due to ambiguities; however, normalization gives you the absolute depth differences between pixels but you will lose resolution due to the difference between high and low depths. We feel that this truncation produces the best results visually as a demo while being simple. See glview for an example of using colors to extend the range.
+Why is frame_convert.py there? Why not just use 1 file?
+We had individual file demos and when we started experimenting with optimization and normalization it made maintaning the duplicative code a nightmare. Now we have this separate file so that we can keep those changes abstracted.

 Do I need to call sync_stop when the program ends?
 No, it is not necessary.

 Do you need to run everything with root?
 No. Use the udev drivers available in the project main directory.

-Why does sync_multi call sync_stop after each kinect?
-The goal is to test multiple kinects, but some machines don't have the USB bandwidth for it. By default, this only lets one run at a time so that you can have many kinects on a hub or a slow laptop. You can comment out the line if your machine can handle it.
+Why does sync_multi have trouble with multiple kinects?
+The goal is to test multiple kinects, but some machines don't have the USB bandwidth for it. By default, this lets them all run, however if you uncomment the sync_stop line it only lets one run at a time so that you can have many kinects on a hub or a slow laptop.

 Differences From C Library
 Things that are intentially different to be more Pythonic
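As the new FAQ entry above says, the demos now share the conversion helpers in frame_convert.py instead of carrying their own OpenCV plumbing. A minimal single-frame sketch of that pattern, using only calls that appear in the demos further down this diff:

    import freenect
    import cv
    import frame_convert

    cv.NamedWindow('Depth')
    depth, _ = freenect.sync_get_depth()            # (depth, timestamp) tuple
    cv.ShowImage('Depth', frame_convert.pretty_depth_cv(depth))
    cv.WaitKey(0)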

wrappers/python/demo_cv_async.py

Lines changed: 5 additions & 15 deletions
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 import freenect
 import cv
-import numpy as np
+import frame_convert

 cv.NamedWindow('Depth')
 cv.NamedWindow('RGB')
@@ -10,33 +10,23 @@

 def display_depth(dev, data, timestamp):
     global keep_running
-    data = data.astype(np.uint8)
-    image = cv.CreateImageHeader((data.shape[1], data.shape[0]),
-                                 cv.IPL_DEPTH_8U,
-                                 1)
-    cv.SetData(image, data.tostring(),
-               data.dtype.itemsize * data.shape[1])
-    cv.ShowImage('Depth', image)
+    cv.ShowImage('Depth', frame_convert.pretty_depth_cv(data))
     if cv.WaitKey(10) == 27:
         keep_running = False


 def display_rgb(dev, data, timestamp):
     global keep_running
-    image = cv.CreateImageHeader((data.shape[1], data.shape[0]),
-                                 cv.IPL_DEPTH_8U,
-                                 3)
-    # Note: We swap from RGB to BGR here
-    cv.SetData(image, data[:, :, ::-1].tostring(),
-               data.dtype.itemsize * 3 * data.shape[1])
-    cv.ShowImage('RGB', image)
+    cv.ShowImage('RGB', frame_convert.video_cv(data))
     if cv.WaitKey(10) == 27:
         keep_running = False


 def body(*args):
     if not keep_running:
         raise freenect.Kill
+
+
 print('Press ESC in window to stop')
 freenect.runloop(depth=display_depth,
                  video=display_rgb,

wrappers/python/demo_cv_sync.py

Lines changed: 13 additions & 5 deletions
@@ -1,15 +1,23 @@
 #!/usr/bin/env python
 import freenect
 import cv
-import numpy as np
+import frame_convert

 cv.NamedWindow('Depth')
 cv.NamedWindow('Video')
 print('Press ESC in window to stop')
+
+
+def get_depth():
+    return frame_convert.pretty_depth_cv(freenect.sync_get_depth()[0])
+
+
+def get_video():
+    return frame_convert.video_cv(freenect.sync_get_video()[0])
+
+
 while 1:
-    depth, timestamp = freenect.sync_get_depth()
-    rgb, timestamp = freenect.sync_get_video()
-    cv.ShowImage('Depth', depth.astype(np.uint8))
-    cv.ShowImage('Video', rgb[:, :, ::-1].astype(np.uint8))
+    cv.ShowImage('Depth', get_depth())
+    cv.ShowImage('Video', get_video())
     if cv.WaitKey(10) == 27:
         break

wrappers/python/demo_cv_sync_multi.py

Lines changed: 23 additions & 7 deletions
@@ -1,23 +1,39 @@
 #!/usr/bin/env python
+"""This goes through each kinect on your system, grabs one frame and
+displays it. Uncomment the commented line to shut down after each frame
+if your system can't handle it (will get very low FPS but it should work).
+This will keep trying indeces until it finds one that doesn't work, then it
+starts from 0.
+"""
 import freenect
 import cv
-import numpy as np
+import frame_convert

 cv.NamedWindow('Depth')
 cv.NamedWindow('Video')
 ind = 0
-print('Press ESC to stop')
+print('%s\nPress ESC to stop' % __doc__)
+
+
+def get_depth(ind):
+    return frame_convert.pretty_depth_cv(freenect.sync_get_depth(ind)[0])
+
+
+def get_video(ind):
+    return frame_convert.video_cv(freenect.sync_get_video(ind)[0])
+
+
 while 1:
     print(ind)
     try:
-        depth, timestamp = freenect.sync_get_depth(ind)
-        rgb, timestamp = freenect.sync_get_video(ind)
+        depth = get_depth(ind)
+        video = get_video(ind)
     except TypeError:
         ind = 0
         continue
     ind += 1
-    cv.ShowImage('Depth', depth.astype(np.uint8))
-    cv.ShowImage('Video', rgb[:, :, ::-1].astype(np.uint8))
+    cv.ShowImage('Depth', depth)
+    cv.ShowImage('Video', video)
     if cv.WaitKey(10) == 27:
         break
-    freenect.sync_stop()  # NOTE: May remove if you have good USB bandwidth
+    #freenect.sync_stop()  # NOTE: Uncomment if your machine can't handle it
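The except TypeError above is what makes the demo wrap its index back to 0: once ind goes past the last attached Kinect, sync_get_depth(ind) apparently returns None (an assumption consistent with the docstring's "keep trying indeces until it finds one that doesn't work"), so taking [0] of the result raises TypeError. A hedged sketch of the same probe in isolation:

    import freenect

    def kinect_present(ind):
        """Return True if a Kinect answers at index `ind`.

        Assumes sync_get_depth() yields None instead of a (depth, timestamp)
        tuple when the index has no device, which is what get_depth(ind)
        turns into a TypeError inside the demo loop.
        """
        return freenect.sync_get_depth(ind) is not None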

wrappers/python/demo_cv_thresh_sweep.py

Lines changed: 9 additions & 1 deletion
@@ -11,8 +11,16 @@
 def disp_thresh(lower, upper):
     depth, timestamp = freenect.sync_get_depth()
     depth = 255 * np.logical_and(depth > lower, depth < upper)
-    cv.ShowImage('Depth', depth.astype(np.uint8))
+    depth = depth.astype(np.uint8)
+    image = cv.CreateImageHeader((depth.shape[1], depth.shape[0]),
+                                 cv.IPL_DEPTH_8U,
+                                 1)
+    cv.SetData(image, depth.tostring(),
+               depth.dtype.itemsize * depth.shape[1])
+    cv.ShowImage('Depth', image)
     cv.WaitKey(10)
+
+
 lower = 0
 upper = 100
 max_upper = 2048

wrappers/python/demo_ipython.py

Lines changed: 0 additions & 29 deletions
This file was deleted.

wrappers/python/demo_kill_async.py

Lines changed: 0 additions & 14 deletions
This file was deleted.

wrappers/python/demo_mp_async.py

Lines changed: 4 additions & 2 deletions
@@ -1,8 +1,8 @@
 #!/usr/bin/env python
 import freenect
 import matplotlib.pyplot as mp
-import numpy as np
 import signal
+import frame_convert

 mp.ion()
 image_rgb = None
@@ -12,7 +12,7 @@

 def display_depth(dev, data, timestamp):
     global image_depth
-    data = data.astype(np.uint8)
+    data = frame_convert.pretty_depth(data)
     mp.gray()
     mp.figure(1)
     if image_depth:
@@ -40,6 +40,8 @@ def body(*args):
 def handler(signum, frame):
     global keep_running
     keep_running = False
+
+
 print('Press Ctrl-C in terminal to stop')
 signal.signal(signal.SIGINT, handler)
 freenect.runloop(depth=display_depth,

wrappers/python/demo_mp_sync.py

Lines changed: 19 additions & 11 deletions
@@ -1,31 +1,39 @@
 #!/usr/bin/env python
 import freenect
 import matplotlib.pyplot as mp
-import numpy as np
+import frame_convert
 import signal

 keep_running = True
-mp.ion()
-mp.figure(1)
-mp.gray()
-image_depth = mp.imshow(freenect.sync_get_depth()[0].astype(np.uint8),
-                        interpolation='nearest', animated=True)
-mp.figure(2)
-image_rgb = mp.imshow(freenect.sync_get_video()[0],
-                      interpolation='nearest', animated=True)
+
+
+def get_depth():
+    return frame_convert.pretty_depth(freenect.sync_get_depth()[0])
+
+
+def get_video():
+    return freenect.sync_get_video()[0]


 def handler(signum, frame):
     """Sets up the kill handler, catches SIGINT"""
     global keep_running
     keep_running = False
+
+
+mp.ion()
+mp.gray()
+mp.figure(1)
+image_depth = mp.imshow(get_depth(), interpolation='nearest', animated=True)
+mp.figure(2)
+image_rgb = mp.imshow(get_video(), interpolation='nearest', animated=True)
 print('Press Ctrl-C in terminal to stop')
 signal.signal(signal.SIGINT, handler)

 while keep_running:
     mp.figure(1)
-    image_depth.set_data(freenect.sync_get_depth()[0].astype(np.uint8))
+    image_depth.set_data(get_depth())
     mp.figure(2)
-    image_rgb.set_data(freenect.sync_get_video()[0])
+    image_rgb.set_data(get_video())
     mp.draw()
     mp.waitforbuttonpress(0.01)

wrappers/python/frame_convert.py

Lines changed: 59 additions & 0 deletions
@@ -0,0 +1,59 @@
+import cv
+import numpy as np
+
+
+def pretty_depth(depth):
+    """Converts depth into a 'nicer' format for display
+
+    This is abstracted to allow for experimentation with normalization
+
+    Args:
+        depth: A numpy array with 2 bytes per pixel
+
+    Returns:
+        A numpy array that has been processed whos datatype is unspecified
+    """
+    np.clip(depth, 0, 2**10 - 1, depth)
+    depth >>= 2
+    depth = depth.astype(np.uint8)
+    return depth
+
+
+def pretty_depth_cv(depth):
+    """Converts depth into a 'nicer' format for display
+
+    This is abstracted to allow for experimentation with normalization
+
+    Args:
+        depth: A numpy array with 2 bytes per pixel
+
+    Returns:
+        An opencv image who's datatype is unspecified
+    """
+    depth = pretty_depth(depth)
+    image = cv.CreateImageHeader((depth.shape[1], depth.shape[0]),
+                                 cv.IPL_DEPTH_8U,
+                                 1)
+    cv.SetData(image, depth.tostring(),
+               depth.dtype.itemsize * depth.shape[1])
+    return image
+
+
+def video_cv(video):
+    """Converts video into a BGR format for opencv
+
+    This is abstracted out to allow for experimentation
+
+    Args:
+        video: A numpy array with 1 byte per pixel, 3 channels RGB
+
+    Returns:
+        An opencv image who's datatype is 1 byte, 3 channel BGR
+    """
+    video = video[:, :, ::-1]  # RGB -> BGR
+    image = cv.CreateImageHeader((video.shape[1], video.shape[0]),
+                                 cv.IPL_DEPTH_8U,
+                                 3)
+    cv.SetData(image, video.tostring(),
+               video.dtype.itemsize * 3 * video.shape[1])
+    return image
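pretty_depth() is intentionally the single place that decides how raw depth becomes displayable, so normalization experiments like the one the removed README entry weighed against truncation can be swapped in without touching any demo. A hypothetical example of such an experiment (not part of this commit; the function name and scaling are assumptions):

    import numpy as np

    def normalized_depth(depth):
        """Stretch this frame's own min..max depth across the full 0-255 range.

        Unlike pretty_depth() above, this uses the whole 8-bit range for
        whatever depths are present, at the cost of resolution when the
        near-to-far spread is large (the trade-off the old FAQ described).
        """
        depth = depth.astype(np.float32)
        lo, hi = depth.min(), depth.max()
        if hi > lo:
            depth = (depth - lo) * (255.0 / (hi - lo))
        else:
            depth = np.zeros_like(depth)
        return depth.astype(np.uint8)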
