Commit b2422c03 authored by Jason Kridner

Merge branch 'main' of git.beagleboard.org:docs/docs.beagleboard.io

parents 8e5e27f8 ce9618b7
1 merge request: !20 Minor index updates
Showing with 1460 additions and 2 deletions
@@ -2,6 +2,7 @@ FROM python:3.9-alpine as sphinx-build-env
 RUN apk update
 RUN python -m pip install --upgrade pip
 RUN pip install -U sphinx
+RUN pip install -U sphinx_design
 RUN pip install sphinxcontrib-svg2pdfconverter
 RUN apk add librsvg
 RUN pip install sphinx_rtd_theme
 VERSION_MAJOR = 0
 VERSION_MINOR = 0
-PATCHLEVEL = 6
-VERSION_TWEAK = 1
+PATCHLEVEL = 7
+VERSION_TWEAK = 2
 EXTRAVERSION = beta
.. _ai_64_edgeai_configuration:
Demo Configuration file
#########################
The demo config file uses YAML format to define input sources, models, outputs,
and finally the flows, which define how everything is connected. Config files
for the out-of-box demos are kept in the ``edge_ai_apps/configs`` folder. The
folder contains config files for all the use cases, including the multi-input
and multi-inference cases. The folder also has a template YAML file,
``app_config_template.yaml``, which has a detailed explanation of all the
parameters supported in the config file.
The config file is divided into 4 sections:
#. Inputs
#. Models
#. Outputs
#. Flows
Inputs
======
The input section defines a list of supported inputs like camera, filesrc etc.,
along with their properties, as shown below.
.. code-block:: yaml
inputs:
  input0: #Camera Input
    source: /dev/video2 #Device file entry of the camera
    format: jpeg #Input data format supported by camera
    width: 1280 #Width and Height of the input
    height: 720
    framerate: 30 #Framerate of the source
  input1: #Video Input
    source: ../data/videos/video_0000_h264.mp4 #Video file
    format: h264 #File encoding format
    width: 1280
    height: 720
    framerate: 25
  input2: #Image Input
    source: ../data/images/%04d.jpg #Sequence of Image files, printf style formatting is used
    width: 1280
    height: 720
    index: 0 #Starting Index (optional)
    framerate: 1
All supported inputs are listed in the template config file.
Below are the details of the most commonly used inputs.
.. _ai_64_edgeai_camera_sources:
Camera sources (v4l2)
---------------------
The **v4l2src** GStreamer element is used to capture frames from camera sources
which are exposed as v4l2 devices. In Linux, many devices are implemented as
v4l2 devices, and not all of them are camera devices. You need to make sure the
correct device is configured to run the demo successfully.
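If you want to identify the camera nodes yourself, the ``v4l2-ctl`` tool from
the ``v4l-utils`` package can enumerate them; a minimal sketch (the device path
is an example and will differ per setup):
.. code-block:: bash
# List every v4l2 device node and the driver behind it
v4l2-ctl --list-devices
# Query one node to check whether it is actually a capture device
v4l2-ctl -d /dev/video2 --all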
``init_script.sh`` is run as part of systemd; it detects all the cameras
connected and prints details like below in the UART console:
.. code-block:: bash
debian@beaglebone:/opt/edge_ai_apps# ./init_script.sh
USB Camera detected
device = /dev/video18
format = jpeg
CSI Camera 0 detected
device = /dev/video2
name = imx219 8-0010
format = [fmt:SRGGB8_1X8/1920x1080]
subdev_id = 2
isp_required = yes
IMX390 Camera 0 detected
device = /dev/video18
name = imx390 10-001a
format = [fmt:SRGGB12_1X12/1936x1100 field: none]
subdev_id = /dev/v4l-subdev7
isp_required = yes
ldc_required = yes
The script can also be run manually later to get the camera details.
From the above log we can determine that 1 USB camera is connected
(/dev/video18), and 1 CSI camera is connected (/dev/video2), which is an IMX219
raw sensor and needs the ISP. The IMX390 camera needs both the ISP and LDC.
Using this method, you can configure the correct device for camera capture in
the input section of the config file.
.. code-block:: yaml
input0:
  source: /dev/video18 #USB Camera
  format: jpeg #if connected USB camera supports jpeg
  width: 1280
  height: 720
  framerate: 30
input1:
  source: /dev/video2 #CSI Camera
  format: auto #let the gstreamer negotiate the format
  width: 1280
  height: 720
  framerate: 30
input2:
  source: /dev/video2 #IMX219 raw sensor that needs ISP
  format: rggb #ISP will be added in the pipeline
  width: 1920
  height: 1080
  framerate: 30
  subdev-id: 2 #needed by ISP to control sensor params via ioctls
input3:
  source: /dev/video2 #IMX390 raw sensor that needs ISP
  width: 1936
  height: 1100
  format: rggb12 #ISP will be added in the pipeline
  subdev-id: 2 #needed by ISP to control sensor params via ioctls
  framerate: 30
  sen-id: imx390
  ldc: True #LDC will be added in the pipeline
Make sure to configure the correct ``format`` for the camera input: ``jpeg`` for
a USB camera that supports MJPEG (e.g. Logitech C270 USB camera), ``auto`` for a
CSI camera to let GStreamer negotiate the format, and ``rggb`` for a sensor that
needs the ISP.
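To double-check which formats and resolutions a camera node actually supports
before editing the config, you can query it with ``v4l2-ctl`` (a sketch; the
device path is an example):
.. code-block:: bash
# Enumerate pixel formats, frame sizes and frame rates offered by the camera
v4l2-ctl -d /dev/video18 --list-formats-ext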
Video sources
-------------
H.264 and H.265 encoded videos can be provided as input sources to the demos.
Sample video files are provided under ``/opt/edge_ai_apps/data/videos/video_0000_h264.mp4``
and ``/opt/edge_ai_apps/data/videos/video_0000_h265.mp4``.
.. code-block:: yaml
input1:
  source: ../data/videos/video_0000_h264.mp4
  format: h264
  width: 1280
  height: 720
  framerate: 25
input2:
  source: ../data/videos/video_0000_h265.mp4
  format: h265
  width: 1280
  height: 720
  framerate: 25
Make sure to configure the correct ``format`` for the video input as shown above.
By default the format is set to ``auto``, in which case the GStreamer
bin ``decodebin`` is used instead.
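As a quick sanity check that a video file decodes on your setup, a generic
GStreamer pipeline can be run by hand; this is only a sketch using stock
software elements, while the demos may construct a different pipeline with
hardware decoders:
.. code-block:: bash
# Demux the .mp4, parse the H.264 stream, decode it in software and discard
# the frames; errors out if the file or format is wrong
gst-launch-1.0 filesrc location=../data/videos/video_0000_h264.mp4 ! \
qtdemux ! h264parse ! avdec_h264 ! fakesink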
Image sources
-------------
JPEG compressed images can be provided as inputs to the demos. A sample set of
images is provided under ``/opt/edge_ai_apps/data/images``. The file names are
numbered sequentially and incrementally, and the demo plays the files at the
fps specified by the user.
.. code-block:: yaml
input2:
  source: ../data/images/%04d.jpg
  width: 1280
  height: 720
  index: 0
  framerate: 1
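To verify that a JPEG sequence plays back as expected, the GStreamer
``multifilesrc`` element mirrors the printf-style indexing used above; a
minimal sketch with software decode:
.. code-block:: bash
# Read %04d.jpg starting at index 0, decode and display at 1 fps
gst-launch-1.0 multifilesrc location=../data/images/%04d.jpg index=0 \
caps="image/jpeg,framerate=1/1" ! jpegdec ! videoconvert ! autovideosink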
RTSP sources
------------
H.264 encoded video streams, either coming from an RTSP-compliant IP camera or
from an RTSP server running on a remote PC, can be provided as inputs to the demo.
.. code-block:: yaml
input0:
source: rtsp://172.24.145.220:8554/test # rtsp stream url, replace this with correct url
width: 1280
height: 720
framerate: 30
.. note::
Usually video streams from any IP camera will be encrypted and cannot be
played back directly without a decryption key. We tested the RTSP source by
setting up an RTSP server on an Ubuntu 18.04 PC by referring to this write-up:
`Setting up RTSP server on PC
<https://gist.github.com/Santiago-vdk/80c378a315722a1b813ae5da1661f890>`_
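Before pointing the demo at an RTSP URL, it can help to confirm that the
stream decodes with a plain GStreamer pipeline; a sketch using software decode
(replace the URL with your server's):
.. code-block:: bash
# Pull the RTSP stream, depayload and decode the H.264 video
gst-launch-1.0 rtspsrc location=rtsp://172.24.145.220:8554/test ! \
rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink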
Models
======
The model section defines a list of models that are used in the demo. The path
to the model directory is a required argument for each model; the rest are
optional properties specific to given use cases, as shown below.
.. code-block:: yaml
models:
  model0:
    model_path: ../models/segmentation/ONR-SS-871-deeplabv3lite-mobv2-cocoseg21-512x512 #Model Directory
    alpha: 0.4 #alpha for blending segmentation mask (optional)
  model1:
    model_path: ../models/detection/TFL-OD-202-ssdLite-mobDet-DSP-coco-320x320
    viz_threshold: 0.3 #Visualization threshold for adding bounding boxes (optional)
  model2:
    model_path: ../models/classification/TVM-CL-338-mobileNetV2-qat
    topN: 5 #Number of top N classes (optional)
Below are some of the use-case-specific properties:
#. **alpha**: Determines the weight of the mask for blending the semantic
segmentation output with the input image: ``alpha * mask + (1 - alpha) * image``.
#. **viz_threshold**: Score threshold for drawing the bounding boxes of detected
objects in object detection. This can be used to control the number of boxes
in the output: increase it if there are too many, decrease it if there are
very few.
#. **topN**: Number of most probable classes to overlay on the image
classification output.
The content of the model directory and its structure is discussed in detail in
:ref:`pub_edgeai_import_custom_models`.
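As a quick way to see what a model package contains before reading that
section, you can simply list one of the out-of-box model directories shipped
with the demos (path taken from the example above):
.. code-block:: bash
# Recursively list the files of a pre-packaged classification model
ls -R ../models/classification/TVM-CL-338-mobileNetV2-qat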
Outputs
=======
The output section defines a list of supported outputs.
.. code-block:: yaml
outputs:
  output0: #Display Output
    sink: kmssink
    width: 1920 #Width and Height of the output
    height: 1080
    connector: 39 #Connector ID for kmssink (optional)
  output1: #Video Output
    sink: ../data/output/videos/output_video.mkv #Output video file
    width: 1920
    height: 1080
  output2: #Image Output
    sink: ../data/output/images/output_image_%04d.jpg #Image file name, printf style formatting is used
    width: 1920
    height: 1080
All supported outputs are listed in the template config file.
Below are the details of the most commonly used outputs.
Display Sink (kmssink)
----------------------
When you have only one display connected to the board, kmssink will try to use
it for displaying the output buffers. In case you have multiple display
monitors connected (e.g. DisplayPort and HDMI), you can select a specific
display for kmssink by passing its connector ID number.
The following command finds the connected displays available for use.
**Note**: Run this command outside the Docker container. The first number in
each line is the connector ID, which we will use in the next step.
.. code-block:: bash
debian@beaglebone:/opt/edge_ai_apps# modetest -M tidss -c | grep connected
39 38 connected DP-1 530x300 12 38
48 0 disconnected HDMI-A-1 0x0 0 47
From the above output, we can see that connector ID 39 is connected. Configure
this connector ID in the output section of the config file.
Video sinks
-----------
The post-processed outputs can be encoded in H.264 format and stored on disk.
Please specify the location of the video file in the configuration file.
.. code-block:: yaml
output1:
  sink: ../data/output/videos/output_video.mkv
  width: 1920
  height: 1080
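For reference, a standalone pipeline that encodes to an H.264 ``.mkv`` file
can be run by hand; this is a sketch using the software ``x264enc`` element and
a test pattern, whereas the demos may use hardware encoders internally:
.. code-block:: bash
# Encode 300 frames of a 1080p test pattern to H.264 and mux into Matroska
gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1920,height=1080 ! \
x264enc ! h264parse ! matroskamux ! filesink location=output_video.mkv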
Image sinks
-----------
The post-processed outputs can be stored as JPEG compressed images.
Please specify the location of the image files in the configuration file.
The images will be named sequentially and incrementally as shown.
.. code-block:: yaml
output2:
  sink: ../data/output/images/output_image_%04d.jpg
  width: 1920
  height: 1080
Flows
=====
The flows section defines how inputs, models and outputs are connected.
Multiple flows can be defined to achieve multi-input, multi-inference
pipelines, as shown below.
.. code-block:: yaml
flows:
  flow0: #First Flow
    input: input0 #Input for the Flow
    models: [model1, model2] #List of models to be used
    outputs: [output0, output0] #Outputs to be used for each model inference output
    mosaic: #Positions to place the inference outputs in the output frame
      mosaic0:
        width: 800
        height: 450
        pos_x: 160
        pos_y: 90
      mosaic1:
        width: 800
        height: 450
        pos_x: 960
        pos_y: 90
  flow1: #Second Flow
    input: input1
    models: [model0, model3]
    outputs: [output0, output0]
    mosaic:
      mosaic0:
        width: 800
        height: 450
        pos_x: 160
        pos_y: 540
      mosaic1:
        width: 800
        height: 450
        pos_x: 960
        pos_y: 540
Each flow should have exactly **1 input**, **n models** to infer the given input,
and **n outputs** to render the output of each inference. Along with the input,
models and outputs, it is required to define **n mosaics**, which give the
positions of the inference outputs in the final output plane. This is needed
because multiple inference outputs can be rendered to the same output (e.g. the
display). In the example above, four 800x450 mosaic windows are arranged in a
2x2 grid on a 1920x1080 display.
Command line arguments
----------------------
Limited set of command line arguments can be provided, run with '-h' or '--help'
option to list the supported parameters.
.. code-block:: bash
usage: Run : ./app_edgeai.py -h for help
positional arguments:
config Path to demo config file
ex: ./app_edgeai.py ../configs/app_config.yaml
optional arguments:
-h, --help show this help message and exit
-n, --no-curses Disable curses report
default: Disabled
-v, --verbose Verbose option to print profile info on stdout
default: Disabled
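For example, to run a demo with the curses report disabled and verbose
profiling printed on stdout (config file path taken from the usage text above):
.. code-block:: bash
./app_edgeai.py -n -v ../configs/app_config.yaml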
.. _ai_64_edgeai_datasheet:
Datasheet
##########
This chapter describes the performance measurements of the Edge AI Inference
demos.
Performance data for the demos can be auto-generated by running the following
command on the target:
.. code-block:: bash
debian@beaglebone:/opt/edge_ai_apps/tests# ./gen_data_sheet.sh
The performance measurements include the following:
#. **FPS** : Effective framerate at which the application runs
#. **Total time** : Average time taken to process each frame, which includes
pre-processing, inference and post-processing time
#. **Inference time** : Average time taken to infer each frame
#. **CPU loading** : Loading on different CPU cores present
#. **DDR BW** : DDR read and write BW used
#. **HWA Loading** : Loading on different Hardware accelerators present
The following are the latest performance numbers of the C++ demos:
Source : **USB Camera**
====================================
Capture Framerate : **30 fps**
Resolution : **720p**
Format : **JPEG**
.. figure:: ./images/edgeai_object_detection.png
:scale: 60
:align: center
GStreamer based data-flow pipeline with USB camera input and display output
.. csv-table::
:header: "Model", "FPS", "Total time (ms)", "Inference time (ms)", "A72 Load (%)", "DDR Read BW (MB/s)", "DDR Write BW (MB/s)", "DDR Total BW (MB/s)", "C71 Load (%)", "C66_1 Load (%)", "C66_2 Load (%) ", "MCU2_0 Load (%)", "MCU2_1 Load (%)", "MSC_0 (%)", "MSC_1 (%)", "VISS (%)", "NF (%)", "LDC (%)", "SDE (%)", "DOF (%)"
ONR-CL-6150-mobileNetV2-1p4-qat,30.80,33.22,3.02,21.60,1596,619,2215,9.0,20.0,9.0,6.0,1.0,22.17,0,0,0,0,0,0
TFL-CL-0000-mobileNetV1-mlperf,30.69,33.19,1.04,15.93,1425,563,1988,5.0,22.0,9.0,6.0,1.0,21.90,0,0,0,0,0,0
TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320,30.69,33.25,5.00,10.24,1534,570,2104,15.0,29.0,9.0,6.0,1.0,22.67,0,0,0,0,0,0
TVM-CL-3410-gluoncv-mxnet-mobv2,30.58,33.21,2.02,22.80,1522,617,2139,6.0,20.0,9.0,6.0,1.0,21.84,0,0,0,0,0,0
Source : **Video**
==============================
Video Framerate : **30 fps**
Resolution : **720p**
Encoding : **h264**
.. figure:: ./images/edgeai_video_source.png
:scale: 60
:align: center
GStreamer based data-flow pipeline with video file input source and display output
.. csv-table::
:header: "Model", "FPS", "Total time (ms)", "Inference time (ms)", "A72 Load (%)", "DDR Read BW (MB/s)", "DDR Write BW (MB/s)", "DDR Total BW (MB/s)", "C71 Load (%)", "C66_1 Load (%)", "C66_2 Load (%) ", "MCU2_0 Load (%)", "MCU2_1 Load (%)", "MSC_0 (%)", "MSC_1 (%)", "VISS (%)", "NF (%)", "LDC (%)", "SDE (%)", "DOF (%)"
ONR-CL-6150-mobileNetV2-1p4-qat,30.52,33.46,3.03,14.28,990,403,1393,2.0,7.0,4.0,1.0,1.0,10.27,0,0,0,0,0,0
TFL-CL-0000-mobileNetV1-mlperf,30.77,33.47,1.07,30.76,746,97,843,2.0,2.0,1.0,1.0,1.0,15.76,0,0,0,0,0,0
TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320,30.56,33.54,5.06,22.58,736,92,828,2.0,2.0,1.0,1.0,1.0,16.9,0,0,0,0,0,0
TVM-CL-3410-gluoncv-mxnet-mobv2,30.64,33.47,2.01,33.33,712,110,822,1.0,1.0,0.0,1.0,1.0,15.3,0,0,0,0,0,0
Source : **CSI Camera (ov5640)**
============================================
Capture Framerate : **30 fps**
Resolution : **720p**
Format : **YUYV**
.. figure:: ./images/edgeai_ov5640_camera_source.png
:scale: 60
:align: center
GStreamer based data-flow pipeline with CSI camera (OV5640) input and display output
.. csv-table::
:header: "Model", "FPS", "Total time (ms)", "Inference time (ms)", "A72 Load (%)", "DDR Read BW (MB/s)", "DDR Write BW (MB/s)", "DDR Total BW (MB/s)", "C71 Load (%)", "C66_1 Load (%)", "C66_2 Load (%) ", "MCU2_0 Load (%)", "MCU2_1 Load (%)", "MSC_0 (%)", "MSC_1 (%)", "VISS (%)", "NF (%)", "LDC (%)", "SDE (%)", "DOF (%)"
ONR-CL-6150-mobileNetV2-1p4-qat,29.57,34.09,3.02,12.21,1671,699,2370,8.0,45.0,9.0,6.0,1.0,21.35,0,0,0,0,0,0
TFL-CL-0000-mobileNetV1-mlperf,29.41,34.15,1.01,10.27,1502,645,2147,5.0,47.0,9.0,6.0,1.0,20.96,0,0,0,0,0,0
TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320,29.36,34.65,5.00,10.5,1610,655,2265,14.0,53.0,9.0,6.0,1.0,21.47,0,0,0,0,0,0
TVM-CL-3410-gluoncv-mxnet-mobv2,29.38,34.17,2.01,11.66,1596,698,2294,6.0,45.0,9.0,5.0,1.0,21.10,0,0,0,0,0,0
Source : **CSI Camera with VISS (imx219)**
======================================================
Capture Framerate : **30 fps**
Resolution : **1080p**
Format : **SRGGB8**
.. figure:: ./images/edgeai_rpi_camera_source.png
:scale: 60
:align: center
GStreamer based data-flow pipeline with IMX219 sensor, ISP and display
.. csv-table::
:header: "Model", "FPS", "Total time (ms)", "Inference time (ms)", "A72 Load (%)", "DDR Read BW (MB/s)", "DDR Write BW (MB/s)", "DDR Total BW (MB/s)", "C71 Load (%)", "C66_1 Load (%)", "C66_2 Load (%) ", "MCU2_0 Load (%)", "MCU2_1 Load (%)", "MSC_0 (%)", "MSC_1 (%)", "VISS (%)", "NF (%)", "LDC (%)", "SDE (%)", "DOF (%)"
ONR-CL-6150-mobileNetV2-1p4-qat,30.64,33.19,3.01,15.72,1781,853,2634,9.0,16.0,9.0,13.0,1.0,31.78,0,22.37,0,0,0,0
TFL-CL-0000-mobileNetV1-mlperf,30.59,33.14,1.04,12.78,1612,798,2410,5.0,18.0,9.0,13.0,1.0,31.65,0,22.31,0,0,0,0
TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320,30.56,33.07,5.00,13.30,1730,809,2539,15.0,25.0,9.0,13.0,1.0,32.6,0,22.19,0,0,0,0
TVM-CL-3410-gluoncv-mxnet-mobv2,30.48,33.14,2.01,12.91,1708,852,2560,7.0,16.0,9.0,13.0,1.0,31.83,0,22.26,0,0,0,0
Source : **IMX390 over FPD-Link**
=============================================
Capture Framerate : **30 fps**
Resolution : **1080p**
Format : **SRGGB12**
.. figure:: ./images/edgeai_imx390_camera_source.png
:scale: 60
:align: center
GStreamer based data-flow pipeline with IMX390 sensor, ISP, LDC and display
.. csv-table::
:header: "Model", "FPS", "Total time (ms)", "Inference time (ms)", "A72 Load (%)", "DDR Read BW (MB/s)", "DDR Write BW (MB/s)", "DDR Total BW (MB/s)", "C71 Load (%)", "C66_1 Load (%)", "C66_2 Load (%) ", "MCU2_0 Load (%)", "MCU2_1 Load (%)", "MSC_0 (%)", "MSC_1 (%)", "VISS (%)", "NF (%)", "LDC (%)", "SDE (%)", "DOF (%)"
ONR-CL-6150-mobileNetV2-1p4-qat,30.59,33.15,3.09,25.18,2207,1102,3309,10.0,16.0,9.0,14.0,1.0,31.73,0,22.94,0,10.8,0,0
TFL-CL-0000-mobileNetV1-mlperf,30.53,33.15,1.21,16.20,2019,1040,3059,5.0,18.0,9.0,15.0,1.0,32.80,0,23.34,0,10.10,0,0
TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320,30.43,33.13,5.02,23.7,2201,1067,3268,15.0,25.0,9.0,14.0,1.0,32.80,0,22.88,0,9.95,0,0
TVM-CL-3410-gluoncv-mxnet-mobv2,30.44,33.16,2.12,21.50,2111,1100,3211,7.0,16.0,9.0,15.0,1.0,32.28,0,22.88,0,10.6,0,0
.. _ai_64_edgeai_docker_env:
Docker Environment
###################
Docker is a set of "platform as a service" products that use OS-level
virtualization to deliver software in packages called containers.
A Docker container provides a quick-start environment for the developer to
run the out-of-box demos and build applications.
The Docker image is based on Ubuntu 20.04 LTS and contains the different
open-source components like OpenCV, GStreamer, Python and pip packages
which are required to run the demos. The user can choose to install any
additional 3rd-party applications and packages as required.
.. _ai_64_edgeai_docker_build_ontarget:
Building Docker image
======================
The ``docker/Dockerfile`` in the edge_ai_apps repo describes the recipe for
creating the Docker container image. Feel free to review and update it to
include additional packages before building the image.
.. note::
Building the Docker image on the target using the provided Dockerfile will
take about 15-20 minutes to complete with a good internet connection.
Building Docker containers on the target can be slow and resource constrained.
The Dockerfile provided will build on the target without any issues, but if
you add more packages or build components from source, running out of memory
can be a common problem. As an alternative, we highly recommend trying
QEMU builds for cross-compiling the images for the arm64 architecture on a PC
and then loading the compiled image on the target, as sketched below.
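A common way to do such a cross-build on an x86 PC is QEMU user-mode emulation
together with Docker buildx; a minimal sketch, assuming Docker with buildx is
installed on the PC and with illustrative image/tag names:
.. code-block:: bash
# One-time setup: register QEMU binfmt handlers so arm64 binaries run on x86
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# Cross-build the image for arm64 and keep it in the local Docker store
docker buildx build --platform linux/arm64 -t edge_ai_kit:arm64 --output type=docker .
# Save it to a tar file, copy it to the target, then: docker load --input edge_ai_kit.tar
docker save --output edge_ai_kit.tar edge_ai_kit:arm64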
Initiate the Docker image build as shown:
.. code-block:: bash
debian@beaglebone:/opt/edge_ai_apps/docker# ./docker_build.sh
Running the Docker container
============================
Enter the Docker session as shown:
.. code-block:: bash
debian@beaglebone:/opt/edge_ai_apps/docker# ./docker_run.sh
This will start an Ubuntu 20.04 LTS image based Docker container, and the
prompt will change as below:
.. code-block:: bash
[docker] debian@beaglebone:/opt/edge_ai_apps#
The Docker container is created in privileged mode, so that it has root
capabilities over all the devices on the target system, such as the network.
The container file system also mounts the target file system's /dev and /opt
to access the camera, display and other hardware accelerators the SoC has to
offer.
.. note::
It is highly recommended to use the docker_run.sh script to launch the
Docker container, because this script takes care of saving any changes
made to the filesystem. This makes sure that any modifications to
the Docker filesystem, including new package installations, updates to
files and even the command history, are saved automatically and are
available the next time you launch the container. The container will
be committed only if you exit from it explicitly. If you restart
the board without exiting the container, any changes made since the last
saved state will be lost.
.. note::
After building and running the Docker container, one needs to run
``setup_script.sh`` before running any of the demo applications.
Please refer to :ref:`pub_edgeai_install_dependencies` for more details.
.. _ai_64_edgeai_docker_additional_commands:
Handling proxy settings
=======================
If the board running the Docker container is behind a proxy server, the default
settings for downloading files and installing packages via apt-get will not work.
If you are running the board on the TI network, the Docker build and run scripts
will automatically detect and configure the necessary proxy settings.
For other cases, you need to modify the script ``/usr/bin/setup_proxy.sh``
to add the custom proxy settings required for your network.
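The exact contents depend on your network, but the additions are typically
standard proxy environment variables; a hedged sketch (the proxy host and port
are placeholders):
.. code-block:: bash
# Example lines to add to /usr/bin/setup_proxy.sh (values are placeholders)
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1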
Additional Docker commands
==========================
.. note::
This section is provided only for additional reference and is not required to
run the out-of-box demos.
**Commit Docker container**
Generally, containers have a short life cycle. If the container has any local
changes, it is good to save them on top of the existing Docker image.
When re-running the Docker image, the local changes can then be restored.
The following commands show how to save the changes made to the last container.
Note that this is already done automatically by ``docker_run.sh`` when you exit
the container.
.. code-block:: bash
cont_id=`docker ps -q -l`
docker commit $cont_id edge_ai_kit
docker container rm $cont_id
For more information, refer to:
`Commit Docker image <https://docs.docker.com/engine/reference/commandline/commit/>`_
**Save Docker Image**
A Docker image can be saved as a tar file using the command below:
.. code-block:: bash
docker save --output <pre_built_docker_image.tar> <image_name/ID>
For more information, refer to:
`Save Docker image <https://docs.docker.com/engine/reference/commandline/save/>`_
**Load Docker image**
Load a previously saved Docker image using the command below:
.. code-block:: bash
docker load --input <pre_built_docker_image.tar>
For more information, refer to:
`Load Docker image <https://docs.docker.com/engine/reference/commandline/load/>`_
**Remove Docker image**
A Docker image can be removed using the commands below:
.. code-block:: bash
# Remove a selected image:
docker rmi <image_name/ID>
# Remove all unused images:
docker image prune -a
For more information, refer to
`rmi reference <https://docs.docker.com/engine/reference/commandline/rmi/>`_ and
`Image prune reference <https://docs.docker.com/engine/reference/commandline/image_prune/>`_
**Remove Docker container**
A Docker container can be removed using the commands below:
.. code-block:: bash
# Remove a selected container:
docker rm <container_ID>
# Remove all stopped containers:
docker container prune
For more information, refer to
`rm reference <https://docs.docker.com/engine/reference/commandline/rm/>`_ and
`Container Prune reference <https://docs.docker.com/engine/reference/commandline/container_prune/>`_
Relocating Docker Root Location
===============================
The default location for Docker files is **/var/lib/docker**. Any Docker images
created will be stored here. This becomes a problem anytime the SD card is
updated with a new targetfs. If secondary storage (an SSD or USB-based storage)
is available, then it is recommended to relocate the default Docker root
location so as to preserve any existing Docker images. Once the relocation
has been done, the Docker content will not be affected by any future targetfs
updates or accidental corruption of the SD card.
The following steps outline the process for Docker root directory relocation,
assuming that the current Docker root is not at the desired location; a
consolidated command sketch follows the numbered steps. If the current location
is the desired location, then exit this procedure.
1. Run the 'docker info' command and inspect the output. Locate the line with
content **Docker Root Dir**. It will list the current location.
2. To preserve any existing images, export them to .tar files for importing
later into the new location.
3. Inspect the content under /etc/docker to see if there is a file named
**daemon.json**. If the file is not present, then create **/etc/docker/daemon.json**
and add the following content. Update the 'key:value' pair for the key "graph"
to reflect the desired root location. If the file already exists, then make
sure that the line with "graph" exists in the file and points to the desired
target location.
.. code-block:: json
{
  "graph": "/run/media/nvme0n1/docker_root",
  "storage-driver": "overlay",
  "live-restore": true
}
In the configuration above, the key/value pair
``"graph": "/run/media/nvme0n1/docker_root"`` defines the root location
``/run/media/nvme0n1/docker_root``.
4. Once the daemon.json file has been created or updated, run the following
commands:
.. code-block:: bash
$ systemctl restart docker
$ docker info
Make sure that the new Docker root appears under **Docker Root Dir** value.
5. If you exported the existing images in step (2), then import them; they
will appear under the new Docker root.
6. Anytime the SD card is updated with a new targetfs, steps (1), (3), and
(4) need to be repeated.
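Putting the steps together, below is a condensed sketch of the relocation;
the image name is a placeholder and the root path is the example used above:
.. code-block:: bash
# Step 1: check the current Docker root directory
docker info | grep "Docker Root Dir"
# Step 2: preserve existing images (repeat per image)
docker save --output my_image.tar <image_name/ID>
# Steps 3-4: edit /etc/docker/daemon.json as shown above, then restart
systemctl restart docker
docker info | grep "Docker Root Dir"   # verify the new location
# Step 5: re-import the preserved images
docker load --input my_image.tar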
**Additional references**
| https://docs.docker.com/engine/reference/commandline/images/
| https://docs.docker.com/engine/reference/commandline/ps/
.. _ai_64_edgeai_getting_started:
Getting Started
#################
.. _ai_64_edgeai_getting_started_harware:
Hardware setup
===============
BeagleBone® AI-64 has TI's TDA4VM SoC, which houses dual-core A72s, high-performance
vision accelerators, video codec accelerators, the latest C71x and C66x DSPs,
high-bandwidth real-time IPs for capture and display, a GPU, a dedicated safety
island and security accelerators. The SoC is power-optimized to provide
best-in-class performance for perception, sensor fusion, localization and path
planning tasks in robotics, industrial and automotive applications.
For more details visit https://www.ti.com/product/TDA4VM
.. _ai_64_edgeai_hw_requirements_eaik:
BeagleBone® AI-64
-----------------
BeagleBone® AI-64 brings a complete system for developing artificial intelligence (AI)
and machine learning solutions with the convenience and expandability of the BeagleBone®
platform and the peripherals on board to get started right away learning and building
applications. With locally hosted, ready-to-use, open-source-focused tool chains and
a development environment, a simple web browser, power source and network connection
are all that need to be added to start building performance-optimized embedded
applications. Industry-leading expansion possibilities are enabled through
familiar BeagleBone® cape headers, with hundreds of open-source hardware examples
and dozens of readily available embedded expansion options off-the-shelf.
To run the demos on BeagleBone® AI-64 you will require:
- BeagleBone® AI-64
- USB camera (any V4L2-compliant 1MP/2MP camera, e.g. Logitech C270/C920/C922)
- Full HD eDP/HDMI display
- Minimum 16GB high-performance SD card
- 100Base-T Ethernet cable connected to the internet
- UART cable
- External power supply or power accessory meeting these requirements:
  a. Nominal output voltage: 5 VDC
  b. Maximum output current: 5000 mA
Connect the components to the board as shown in the image.
.. figure:: ./images/board_connections_bbai_64.jpg
:align: center
BeagleBone® AI-64 for Edge AI connections
.. _ai_64_edgeai_usb_camera:
USB Camera
----------
UVC (USB video class) compliant USB cameras are supported on the BeagleBone® AI-64.
The driver for these is enabled in the Linux image. The Linux image has been
tested with the C270/C920/C922 versions of Logitech USB cameras. Please refer to
:ref:`pub_edgeai_multiple_usb_cams` to stream from multiple USB cameras
simultaneously.
.. _ai_64_edgeai_imx219_sensor:
IMX219 Raw sensor
------------------
The **IMX219 camera module** from **Raspberry Pi / Arducam** is supported by
BeagleBone® AI-64. It is an 8MP sensor with no ISP, which can transmit raw
SRGGB8 frames over CSI lanes at 1080p 60 fps. This camera module can be ordered
from
https://www.amazon.com/Raspberry-Pi-Camera-Module-Megapixel/dp/B01ER2SKFS
The camera can be connected to either of the two RPi Zero 22-pin camera headers
on BB AI-64 as shown below.
.. figure::
:scale: 20
:align: center
TODO: IMX219 CSI sensor connection with BeagleBone® AI-64 for Edge AI
Note that the headers have to be lifted up to connect the cameras.
.. note:: To be updated
By default the IMX219 is disabled. After connecting the camera, you can enable
it by specifying the dtb overlay file in
``/run/media/mmcblk0p1/uenv.txt`` as below:
``name_overlays=k3-j721e-edgeai-apps.dtbo k3-j721e-sk-rpi-cam-imx219.dtbo``
Reboot the board after editing and saving the file.
Two RPi cameras can be connected to the 2 headers for multi-camera use cases.
Please refer to :ref:`pub_edgeai_camera_sources` to learn how to list all the
cameras connected and select which one to use for the demo.
By default the imx219 will be configured to capture at 8-bit, but it also
supports 10-bit capture in a 16-bit container. To use it in 10-bit mode, the
below steps are required:
- Modify ``/opt/edge_ai_apps/scripts/setup_cameras.sh`` to set the
format to 10-bit like below:
.. code-block:: bash
CSI_CAM_0_FMT='[fmt:SRGGB10_1X10/1920x1080]'
CSI_CAM_1_FMT='[fmt:SRGGB10_1X10/1920x1080]'
- Change the imaging binaries to use the 10-bit versions:
.. code-block:: bash
mv /opt/imaging/imx219/dcc_2a.bin /opt/imaging/imx219/dcc_2a_8b.bin
mv /opt/imaging/imx219/dcc_viss.bin /opt/imaging/imx219/dcc_viss_8b.bin
mv /opt/imaging/imx219/dcc_2a_10b.bin /opt/imaging/imx219/dcc_2a.bin
mv /opt/imaging/imx219/dcc_viss_10b.bin /opt/imaging/imx219/dcc_viss.bin
- Set the input format in ``/opt/edge_ai_apps/configs/rpiV2_cam_example.yaml``
to ``rggb10``.
Software setup
==============
.. _ai_64_edgeai_prepare_sd_card:
Preparing SD card image
-----------------------
Download the ``bullseye-xfce-edgeai-arm64`` image from the links below and
flash it to an SD card using the `Balena Etcher <https://www.balena.io/etcher/>`_ tool.
- To use via SD card: `bbai64-debian-11.4-xfce-edgeai-arm64-2022-08-02-10gb.img.xz <bbai64-debian-11.4-xfce-edgeai-arm64-2022-08-02-10gb.img.xz>`_
- To flash on eMMC: `bbai64-emmc-flasher-debian-11.4-xfce-edgeai-arm64-2022-08-02-10gb.img.xz <https://rcn-ee.net/rootfs/bb.org/testing/2022-08-02/bullseye-xfce-edgeai-arm64/bbai64-emmc-flasher-debian-11.4-xfce-edgeai-arm64-2022-08-02-10gb.img.xz>`_
The Balena Etcher tool can be installed on either Windows or Linux. Just
download the tool and follow the instructions to prepare the SD card.
.. figure:: ./images/balena_etcher.png
:scale: 100
:align: center
Balena Etcher tool to flash the SD card with the Linux for Edge AI image
The image is created for 16 GB SD cards. If you are using a larger SD card,
it is possible to expand the root filesystem to use the full SD card capacity
using the below steps:
.. code-block:: bash
#find the SD card device entry using lsblk (Eg: /dev/sdc)
#use the following commands to expand the filesystem
#Make sure you have write permission to SD card or run the commands as root
#Unmount the BOOT and rootfs partition before using parted tool
umount /dev/sdX1
umount /dev/sdX2
#Use parted tool to resize the rootfs partition to use
#the entire remaining space on the SD card
#You might require sudo permissions to execute these steps
parted -s /dev/sdX resizepart 2 '100%'
e2fsck -f /dev/sdX2
resize2fs /dev/sdX2
#replace /dev/sdX in above commands with SD card device entry
.. _ai_64_edgeai_poweron_boot:
Power ON and Boot
-----------------
Ensure that the power supply is disconnected before inserting the SD card.
Once the SD card is firmly inserted in its slot and the board is powered on,
the board will take less than 20 seconds to boot and display a wallpaper as
shown in the image below.
.. figure::
:scale: 25
:align: center
TODO: BeagleBone® AI-64 wallpaper upon boot
You can also view the boot log by connecting the UART cable to your PC and
using a serial port communication program.
On **Linux**, **minicom** works well. Please refer to the documentation below
on minicom for more details:
https://help.ubuntu.com/community/Minicom
When starting minicom, turn on the color option as below:
.. code-block:: bash
sudo minicom -D /dev/ttyUSB2 -c on
On **Windows**, **Tera Term** works well. Please refer to the documentation
below on Tera Term for more details:
https://learn.sparkfun.com/tutorials/terminal-basics/tera-term-windows
.. note::
The baud rate should be configured to 115200 bps in the serial port
communication program. You may not see any log in the UART console if you
connect to it after booting is complete, or the login prompt may get lost
among the boot logs; press ENTER to get the login prompt.
As part of Linux systemd startup, ``/opt/edge_ai_apps/init_script.sh`` is
executed, which does the below:
- It kills the Weston compositor, which holds the display pipe. This step will
make the wallpaper shown on the display disappear and come back.
- The display pipe can then be used by the 'kmssink' GStreamer element while
running the demo applications.
- The script can also be used to set up proxies if connected behind a
firewall.
Once Linux boots, log in as the ``root`` user with no password.
.. _ai_64_edgeai_connecting_remotely:
Connect remotely
----------------
If you prefer not to use the UART console, you can also access the device using
the IP address that is shown on the display.
With the IP address, one can ssh directly to the board, view the contents and
run the demos.
For the best experience, we recommend using VSCode, which can be downloaded
from here:
https://code.visualstudio.com/download
You also require the "Remote development extension pack" installed in VSCode,
as described here:
https://code.visualstudio.com/docs/remote/ssh
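For a plain terminal session, ssh with the IP address shown on the display
also works; a sketch (the IP address is a placeholder, and the user follows
the login described earlier):
.. code-block:: bash
ssh root@192.168.1.10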
.. figure::
:scale: 90
:align: center
TODO: Microsoft Visual Studio Code for connecting to BeagleBone® AI-64 for Edge AI via SSH
New image files added under ``beaglebone-ai-64/edge_ai_apps/images/``:
- TDA4VM-SK-SD-Boot.png (4.3 MiB)
- balena_etcher.png (20.1 KiB)
- board_connections_bbai_64.jpg (68.7 KiB)
- board_connections_tda4vm_evm.jpg (4.19 MiB)
- board_connections_tda4vm_sk.jpg (3.39 MiB)
- boot_wallpaper.jpg (1.68 MiB)
- csi_camera_connection.png (295 KiB)
- edge_ai_demos_CPP_Demo_Data_Flow.png (131 KiB)
- edge_ai_demos_Python_Demo_Data_Flow.png (126 KiB)
- edgeai-image-classify.jpg (48.2 KiB)
- edgeai-multi-input-multi-infer.jpg (213 KiB)
- edgeai-object-detect.jpg (44.6 KiB)
A draw.io diagram source file was also added, with pages "CPP Demo Data Flow",
"Python Demo Data Flow" and "Page-3" (XML content omitted).