
Cozmo AI Robot SDK Usage Notes (3) - Vision

zhangrelay · Published 2019-01-28

On robotic perception (the vision part) I once gave a public talk; the full transcript and video recording are available at the CSDN link below:

Robotic Perception - Vision Section:

https://blog.csdn.net/ZhangRelay/article/details/81352622


Cozmo's vision can also do quite a lot: recognizing and tracking pets, cubes, faces and more. It is great fun to play with.

That is what the third part of the tutorials, vision, covers.
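
The face-oriented examples below all follow the same observe-then-react pattern. As a rough sketch (not one of the tutorial scripts; it assumes the standard wait_for_observed_light_cube and set_lights calls from the SDK), the same pattern applied to a light cube looks like this:

#!/usr/bin/env python3
'''Rough sketch: the observe-then-react pattern applied to a light cube.'''

import asyncio
import time

import cozmo


def flash_seen_cube(robot: cozmo.robot.Robot):
    try:
        # Block until a light cube is observed in the camera image
        cube = robot.world.wait_for_observed_light_cube(timeout=30)
    except asyncio.TimeoutError:
        print("No cube seen - exiting")
        return
    cube.set_lights(cozmo.lights.green_light)  # light up the cube we saw
    time.sleep(5)
    cube.set_lights_off()


cozmo.run_program(flash_seen_cube, use_viewer=True)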


1. light when face

When a face is detected in the camera image, Cozmo lights up the LEDs on his backpack.

#!/usr/bin/env python3

# Copyright (c) 2016 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''Wait for Cozmo to see a face, and then turn on his backpack light.

This is a script to show off faces, and how they are easy to use.
It waits for a face, and then will light up his backpack when that face is visible.
'''

import asyncio
import time

import cozmo


def light_when_face(robot: cozmo.robot.Robot):
    '''The core of the light_when_face program'''

    # Move lift down and tilt the head up
    robot.move_lift(-3)
    robot.set_head_angle(cozmo.robot.MAX_HEAD_ANGLE).wait_for_completed()

    face = None

    print("Press CTRL-C to quit")
    while True:
        if face and face.is_visible:
            robot.set_all_backpack_lights(cozmo.lights.blue_light)
        else:
            robot.set_backpack_lights_off()

            # Wait until we can see another face
            try:
                face = robot.world.wait_for_observed_face(timeout=30)
            except asyncio.TimeoutError:
                print("Didn't find a face.")
                return

        time.sleep(.1)


cozmo.run_program(light_when_face, use_viewer=True, force_viewer_on_top=True)
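
A small variation on the script above, reusing only the calls it already demonstrates (the green and red constants from cozmo.lights are assumed to exist as in the standard SDK): keep the backpack green while the face is visible and switch to red while searching for it.

#!/usr/bin/env python3
'''Variation sketch: green backpack while a face is visible, red while searching.'''

import asyncio
import time

import cozmo


def face_status_light(robot: cozmo.robot.Robot):
    robot.set_head_angle(cozmo.robot.MAX_HEAD_ANGLE).wait_for_completed()
    face = None
    while True:
        if face and face.is_visible:
            # Face currently in view: show green on all backpack LEDs
            robot.set_all_backpack_lights(cozmo.lights.green_light)
        else:
            # Face lost (or not seen yet): show red and wait for the next one
            robot.set_all_backpack_lights(cozmo.lights.red_light)
            try:
                face = robot.world.wait_for_observed_face(timeout=10)
            except asyncio.TimeoutError:
                robot.set_backpack_lights_off()
                return
        time.sleep(0.1)


cozmo.run_program(face_status_light, use_viewer=True)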

2. face follower

Detect a face and follow it, adjusting the head angle and driving the treads so that the face stays near the center of the captured image (on both the x and y axes).

#!/usr/bin/env python3

# Copyright (c) 2016 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''Make Cozmo turn toward a face.

This script shows off the turn_towards_face action. It will wait for a face
and then constantly turn towards it to keep it in frame.
'''

import asyncio
import time

import cozmo


def follow_faces(robot: cozmo.robot.Robot):
    '''The core of the follow_faces program'''

    # Move lift down and tilt the head up
    robot.move_lift(-3)
    robot.set_head_angle(cozmo.robot.MAX_HEAD_ANGLE).wait_for_completed()

    face_to_follow = None

    print("Press CTRL-C to quit")
    while True:
        turn_action = None
        if face_to_follow:
            # start turning towards the face
            turn_action = robot.turn_towards_face(face_to_follow)

        if not (face_to_follow and face_to_follow.is_visible):
            # find a visible face, timeout if nothing found after a short while
            try:
                face_to_follow = robot.world.wait_for_observed_face(timeout=30)
            except asyncio.TimeoutError:
                print("Didn't find a face - exiting!")
                return

        if turn_action:
            # Complete the turn action if one was in progress
            turn_action.wait_for_completed()

        time.sleep(.1)


cozmo.run_program(follow_faces, use_viewer=True, force_viewer_on_top=True)
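
If continuous tracking is not needed, the same calls support a one-shot reaction. A minimal sketch, assuming the face_id and name properties of the observed face object behave as in the standard SDK (name is empty unless the face has been enrolled on the robot):

#!/usr/bin/env python3
'''One-shot sketch: wait for a face, turn towards it once, print its id and name.'''

import asyncio

import cozmo


def greet_face_once(robot: cozmo.robot.Robot):
    robot.set_head_angle(cozmo.robot.MAX_HEAD_ANGLE).wait_for_completed()
    try:
        face = robot.world.wait_for_observed_face(timeout=30)
    except asyncio.TimeoutError:
        print("No face seen - exiting")
        return
    # Turn towards the face a single time instead of looping
    robot.turn_towards_face(face).wait_for_completed()
    print("Saw face id=%s name='%s'" % (face.face_id, face.name))


cozmo.run_program(greet_face_once, use_viewer=True)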

3. annotate

This example uses the tkviewer to display the annotated camera feed on screen, and adds a couple of custom annotations of its own using two different methods.

#!/usr/bin/env python3

# Copyright (c) 2016 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''Display a GUI window showing an annotated camera view.

Note:
    This example requires Python to have Tkinter installed to display the GUI.
    It also requires the Pillow and numpy python packages to be pip installed.

The :class:`cozmo.world.World` object collects raw images from Cozmo's camera
and makes them available as a property (:attr:`~cozmo.world.World.latest_image`)
and by generating :class:`cozmo.world.EvtNewCameraImage` events as they come in.

Each image is an instance of :class:`cozmo.world.CameraImage` which provides
access both to the raw camera image, and to a scalable annotated image which
can show where Cozmo sees faces and objects, along with any other information
your program may wish to display.

This example uses the tkviewer to display the annotated camera on the screen
and adds a couple of custom annotations of its own using two different methods.
'''


import sys
import time

try:
    from PIL import ImageDraw, ImageFont
except ImportError:
    sys.exit('run `pip3 install --user Pillow numpy` to run this example')

import cozmo


# Define an annotator using the annotator decorator
@cozmo.annotate.annotator
def clock(image, scale, annotator=None, world=None, **kw):
    d = ImageDraw.Draw(image)
    bounds = (0, 0, image.width, image.height)
    text = cozmo.annotate.ImageText(time.strftime("%H:%M:%S"),  # %M = minutes, not %m (month)
            position=cozmo.annotate.TOP_LEFT)
    text.render(d, bounds)

# Define another annotator as a subclass of Annotator
class Battery(cozmo.annotate.Annotator):
    def apply(self, image, scale):
        d = ImageDraw.Draw(image)
        bounds = (0, 0, image.width, image.height)
        batt = self.world.robot.battery_voltage
        text = cozmo.annotate.ImageText('BATT %.1fv' % batt, color='green')
        text.render(d, bounds)


def cozmo_program(robot: cozmo.robot.Robot):
    robot.world.image_annotator.add_static_text('text', 'Coz-Cam', position=cozmo.annotate.TOP_RIGHT)
    robot.world.image_annotator.add_annotator('clock', clock)
    robot.world.image_annotator.add_annotator('battery', Battery)

    time.sleep(2)

    print("Turning off all annotations for 2 seconds")
    robot.world.image_annotator.annotation_enabled = False
    time.sleep(2)

    print('Re-enabling all annotations')
    robot.world.image_annotator.annotation_enabled = True

    # Disable the face annotator after 10 seconds
    time.sleep(10)
    print("Disabling face annotations (light cubes still annotated)")
    robot.world.image_annotator.disable_annotator('faces')

    # Shutdown the program after 100 seconds
    time.sleep(100)


cozmo.run_program(cozmo_program, use_viewer=True, force_viewer_on_top=True)
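
The decorator pattern in the script carries any PIL drawing code. As a hypothetical extra annotator (the crosshair function and its registration name are mine, not part of the SDK example), a crosshair through the image center could be added alongside clock and Battery:

# Hypothetical extra annotator (relies on the same imports as the example above:
# `from PIL import ImageDraw` and `import cozmo`). It draws a crosshair through
# the image center using plain PIL drawing calls.
@cozmo.annotate.annotator
def crosshair(image, scale, annotator=None, world=None, **kw):
    d = ImageDraw.Draw(image)
    cx, cy = image.width // 2, image.height // 2
    d.line((cx - 10, cy, cx + 10, cy), fill='yellow')  # horizontal tick
    d.line((cx, cy - 10, cx, cy + 10), fill='yellow')  # vertical tick

# Register it inside cozmo_program, next to 'clock' and 'battery':
#     robot.world.image_annotator.add_annotator('crosshair', crosshair)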

4. exposure

This example demonstrates using both auto exposure and manual exposure with Cozmo's camera. The current camera settings are overlaid onto the viewer window on the PC.

#!/usr/bin/env python3

# Copyright (c) 2017 Anki, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License in the file LICENSE.txt or at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''Demonstrate the manual and auto exposure settings of Cozmo's camera.

This example demonstrates the use of auto exposure and manual exposure for
Cozmo's camera. The current camera settings are overlayed onto the camera
viewer window.
'''


import sys
import time

try:
    from PIL import ImageDraw, ImageFont
    import numpy as np
except ImportError:
    sys.exit('run `pip3 install --user Pillow numpy` to run this example')

import cozmo


# A global string value to display in the camera viewer window to make it more
# obvious what the example program is currently doing.
example_mode = ""


# An annotator for live-display of all of the camera info on top of the camera
# viewer window.
@cozmo.annotate.annotator
def camera_info(image, scale, annotator=None, world=None, **kw):
    d = ImageDraw.Draw(image)
    bounds = [3, 0, image.width, image.height]

    camera = world.robot.camera
    text_to_display = "Example Mode: " + example_mode + "\n\n"
    text_to_display += "Fixed Camera Settings (Calibrated for this Robot):\n\n"
    text_to_display += 'focal_length: %s\n' % camera.config.focal_length
    text_to_display += 'center: %s\n' % camera.config.center
    text_to_display += 'fov: <%.3f, %.3f> degrees\n' % (camera.config.fov_x.degrees,
                                                        camera.config.fov_y.degrees)
    text_to_display += "\n"
    text_to_display += "Valid exposure and gain ranges:\n\n"
    text_to_display += 'exposure: %s..%s\n' % (camera.config.min_exposure_time_ms,
                                               camera.config.max_exposure_time_ms)
    text_to_display += 'gain: %.3f..%.3f\n' % (camera.config.min_gain,
                                               camera.config.max_gain)
    text_to_display += "\n"
    text_to_display += "Current settings:\n\n"
    text_to_display += 'Auto Exposure Enabled: %s\n' % camera.is_auto_exposure_enabled
    text_to_display += 'Exposure: %s ms\n' % camera.exposure_ms
    text_to_display += 'Gain: %.3f\n' % camera.gain
    color_mode_str = "Color" if camera.color_image_enabled else "Grayscale"
    text_to_display += 'Color Mode: %s\n' % color_mode_str

    text = cozmo.annotate.ImageText(text_to_display,
                                    position=cozmo.annotate.TOP_LEFT,
                                    line_spacing=2,
                                    color="white",
                                    outline_color="black", full_outline=True)
    text.render(d, bounds)


def demo_camera_exposure(robot: cozmo.robot.Robot):
    global example_mode

    # Ensure camera is in auto exposure mode and demonstrate auto exposure for 5 seconds
    camera = robot.camera
    camera.enable_auto_exposure()
    example_mode = "Auto Exposure"
    time.sleep(5)

    # Demonstrate manual exposure, linearly increasing the exposure time, while
    # keeping the gain fixed at a medium value.
    example_mode = "Manual Exposure - Increasing Exposure, Fixed Gain"
    fixed_gain = (camera.config.min_gain + camera.config.max_gain) * 0.5
    for exposure in range(camera.config.min_exposure_time_ms, camera.config.max_exposure_time_ms+1, 1):
        camera.set_manual_exposure(exposure, fixed_gain)
        time.sleep(0.1)

    # Demonstrate manual exposure, linearly increasing the gain, while keeping
    # the exposure fixed at a relatively low value.
    example_mode = "Manual Exposure - Increasing Gain, Fixed Exposure"
    fixed_exposure_ms = 10
    for gain in np.arange(camera.config.min_gain, camera.config.max_gain, 0.05):
        camera.set_manual_exposure(fixed_exposure_ms, gain)
        time.sleep(0.1)

    # Switch back to auto exposure, demo for a final 5 seconds and then return
    camera.enable_auto_exposure()
    example_mode = "Mode: Auto Exposure"
    time.sleep(5)


def cozmo_program(robot: cozmo.robot.Robot):
    robot.world.image_annotator.add_annotator('camera_info', camera_info)

    # Demo with default grayscale camera images
    robot.camera.color_image_enabled = False
    demo_camera_exposure(robot)

    # Demo with color camera images
    robot.camera.color_image_enabled = True
    demo_camera_exposure(robot)


cozmo.robot.Robot.drive_off_charger_on_connect = False  # Cozmo can stay on his charger for this example
cozmo.run_program(cozmo_program, use_viewer=True, force_viewer_on_top=True)
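
For a scene with stable lighting, the two calls already used in the demo are enough to lock the camera to one fixed setting. A minimal sketch, assuming the same camera.config fields shown above; the 20 ms exposure is an arbitrary example value, not an SDK default:

#!/usr/bin/env python3
'''Sketch: lock Cozmo's camera to one manual exposure setting, then restore auto.'''

import time

import cozmo


def fixed_exposure(robot: cozmo.robot.Robot):
    camera = robot.camera
    # Pick a gain halfway through the valid range reported by the camera config
    gain = (camera.config.min_gain + camera.config.max_gain) * 0.5
    # 20 ms is an arbitrary example value; clamp it into the valid exposure range
    exposure_ms = min(max(20, camera.config.min_exposure_time_ms),
                      camera.config.max_exposure_time_ms)
    camera.set_manual_exposure(exposure_ms, gain)
    time.sleep(10)                 # run with the fixed setting for a while
    camera.enable_auto_exposure()  # hand control back to auto exposure


cozmo.robot.Robot.drive_off_charger_on_connect = False
cozmo.run_program(fixed_exposure, use_viewer=True)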

Fin

