CFU Playground (3) - ML on a board (Arduino Nano 33 BLE)
I am going through this step before running TensorFlow Lite Micro in the RISC-V environment of the CFU Playground. You can skip it if you are already familiar with TensorFlow Lite Micro; if not, like me, it is worth porting your own model first and checking that it works properly. The reason for choosing the Arduino Nano 33 BLE is that it has generous specifications compared to the Arduino Uno: a Cortex-M4 at 64 MHz, 256 KB of SRAM, 1 MB of flash memory, and built-in sensors such as a gyroscope, so you can do interesting experiments with it later. Starting with a board that has too little memory and a weak CPU can cause many complicated errors, and you end up spending a lot of effort optimizing your code just to fit it into memory.
When it comes to price, the ESP32 DevKit could also be an alternative, but TensorFlow Lite Micro on the ESP32 is only supported through ESP-IDF, which is less convenient than the Arduino IDE. Above all, an advantage of the Nano 33 BLE is that Pete Warden, an early contributor to TinyML, provides interesting examples using this board. Most of the tutorials are on GitHub, but it is also helpful to refer to his book.
After installing the Arduino IDE, click Tools -> Manage Libraries to open a pop-up window, then search for "Arduino_TensorFlowLite" and install it. After that, open File -> Examples -> Arduino_TensorFlowLite -> hello_world and modify only the necessary parts. That's it! The hello_world example implements a regression model that estimates a sine wave. Since the input of my CNN model is also a periodic function, I could reuse many parts. The most important part is to insert the model.cpp file created in the previous post and connect the input/output to the TensorFlow Lite Micro library. A sketch of its expected layout is shown below.
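For reference, this is a minimal sketch of the layout such a model.cpp usually has. The names g_model and g_model_len follow the hello_world convention, the byte values are placeholders, and the real contents come from converting the trained .tflite file (for example with xxd -i model.tflite):

// model.h (declarations, matching the hello_world naming convention)
#ifndef MODEL_H_
#define MODEL_H_
extern const unsigned char g_model[];
extern const int g_model_len;
#endif

// model.cpp (the byte values below are placeholders; paste the output of the
// .tflite conversion, e.g. from `xxd -i model.tflite`, here instead)
#include "model.h"
alignas(8) const unsigned char g_model[] = {
  0x1c, 0x00, 0x00, 0x00,  // placeholder bytes only
};
const int g_model_len = sizeof(g_model);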
* In the hello_world tab, edit the following part.
- Replace static tflite::AllOpsResolver resolver; with the code below. Registering only the operators the model actually uses reduces unnecessary memory waste. You can check which operators are required with Netron, a program that parses the tflite file, as shown below.
static tflite::MicroMutableOpResolver<4> micro_op_resolver;
micro_op_resolver.AddConv2D();
micro_op_resolver.AddReshape();
micro_op_resolver.AddFullyConnected();
micro_op_resolver.AddFullyConnected();
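For context, here is a rough sketch of how this resolver plugs into the rest of the example's setup(). The variable names follow hello_world; kTensorArenaSize is a guess you may need to enlarge, and the MicroInterpreter constructor arguments differ between library versions (older releases take an ErrorReporter as the last argument, newer ones omit it), so adjust it to your installed version:

#include <TensorFlowLite.h>
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model.h"

constexpr int kTensorArenaSize = 8 * 1024;  // guess; enlarge if AllocateTensors() fails
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];
static tflite::MicroInterpreter* interpreter = nullptr;
static TfLiteTensor* model_input = nullptr;
static TfLiteTensor* model_output = nullptr;

void setup() {
  Serial.begin(9600);
  static tflite::MicroErrorReporter micro_error_reporter;
  const tflite::Model* model = tflite::GetModel(g_model);

  // Operator registrations from the edit above.
  static tflite::MicroMutableOpResolver<4> micro_op_resolver;
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddFullyConnected();

  // Older library versions take the ErrorReporter as the last argument; newer ones omit it.
  static tflite::MicroInterpreter static_interpreter(
      model, micro_op_resolver, tensor_arena, kTensorArenaSize, &micro_error_reporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();  // check the returned TfLiteStatus in real code
  model_input = interpreter->input(0);
  model_output = interpreter->output(0);
}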
- In hello_world, the input is a single int8_t value per inference, but our model needs 36 floating-point numbers. The example also uses the quantization option when converting the model, so it contains extra code for handling quantized values; we don't need that here because I skipped quantization for simplicity. Just replace the input code as below. Using pointer operations could save a little memory, but for stability I chose plain array indexing.
for (int j = 0; j < input_length; j++) {
  model_input->data.f[j] = x_input[j];
}
- After replacing model.cpp, compile the sketch and upload it to the Arduino board.
- To use Tools -> Serial Plotter, the output is multiplied by 100 and cast to an integer. As shown in the figure below, the inference performs well.
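Putting the pieces together, the inference step in loop() ends up looking roughly like this. It is a sketch under the same assumed names as above; x_input and input_length stand for wherever your 36 input samples come from:

void loop() {
  // Copy the 36 float inputs into the model's input tensor.
  for (int j = 0; j < input_length; j++) {
    model_input->data.f[j] = x_input[j];
  }

  // Run inference and read the first float output.
  if (interpreter->Invoke() != kTfLiteOk) {
    return;  // inference failed; skip this iteration
  }
  float y = model_output->data.f[0];

  // Scale and cast so the Serial Plotter can draw the value.
  Serial.println((int)(y * 100));
}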