Simulink and the Signal Processing Blockset are two of the MathWorks products for designing and analyzing fixed-point digital signal processing (DSP) systems. If you already use these tools to develop signal-processing algorithms, you have probably had to implement those algorithms on embedded DSPs. MathWorks code-generation tools can produce the C code to implement your algorithms on any embedded DSP, such as those in the TI C5000 family or the Blackfin from Analog Devices. In this article we demonstrate how to use Real-Time Workshop Embedded Coder to convert your Simulink models to ANSI/ISO C code and deploy that code on a fixed-point processor.

Many engineers implement their Simulink representations of DSP algorithms by manually writing C and assembly code. That approach is labor intensive, time consuming, and often leads to translation and coding errors. Code-generation tools, on the other hand, let you spend more time optimizing your algorithms, a practice that typically produces final designs that convert to higher-quality C code.

Here we show how to use Real-Time Workshop Embedded Coder to generate C code for a signal-processing algorithm and then deploy that code to an embedded DSP. For the sake of clarity we use a simple audio-processing application, but as you will see, the example can be extended to other DSP algorithms. Our deployment hardware is the TI C5510 DSK, which has a TI C5510 fixed-point DSP and some basic audio hardware.

Design Audio Filters and Convert to Fixed Point

We begin with a Simulink model of a set of audio-signal filters. We specify the audio filters' low-pass and high-pass frequency responses with the Filter Design and Analysis Tool from the Signal Processing Toolbox. Figures 1a, 1b, and 1c show a top-level filtering model and the lower-level subsystems. We apply a test source to the model's audio filtering subsystem. The switch2 and switch3 control signals select a low-pass, high-pass, or unfiltered audio response.

Next we convert the filters to a fixed-point implementation. Using Simulink Fixed Point, we set the fixed-point characteristics of the filters and the test environment to match the word-length characteristics of the TI C5510 DSP.

 

Figure 1a. Audio Filtering Model.

Figure 1b. AudioFilters Subsystem.

Figure 1c. LowPassFilters Subsystem.

 

Generate Code for the Audio Filtering Subsystem

In Simulink, you can generate code for either an entire model or a subsystem. In this case we generate code for the AudioFilters subsystem, shown in Figure 1b.

To interface the subsystem's generated code with externally written code, we must first define the subsystem's input and output signals. We do this by naming the signals in the model, such as pfiltInLeft, and then setting the Real-Time Workshop storage class. There are four storage-class settings:

  • Auto
  • ExportedGlobal
  • ImportedExtern
  • ImportedExternPointer

Use either of the last two selections if you intend to declare the variables for those signals in your manually written code. Here we use the ImportedExternPointer option for the input and output audio signals (pfiltInLeft, pfiltInRight, pfiltOutLeft, pfiltOutRight). Choosing ImportedExternPointer requires us to place the declarations for those pointers in the manually written code. We choose ImportedExtern for the switch control signals (switch2, switch3) and declare those variables in the manually written code as well.
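As a rough illustration of what these storage classes require on the handwritten side, the sketch below shows the kind of definitions the framework code would supply. The int16_T type and the exact declarations are assumptions based on typical Real-Time Workshop output; the actual declarations appear later in dsk_app1.c.

/* Sketch of the handwritten definitions implied by the storage classes;
 * int16_T is assumed here for the 16-bit C5510 target (the real type
 * names come from the generated rtwtypes.h header).                     */
#include "AudioFilters.h"   /* generated interface; pulls in the generated types */

/* ImportedExternPointer signals: we define the pointers and aim them at
 * the audio buffers before each call to the generated step function.    */
int16_T *pfiltInLeft;
int16_T *pfiltInRight;
int16_T *pfiltOutLeft;
int16_T *pfiltOutRight;

/* ImportedExtern signals: we define the variables themselves. */
int16_T switch2;
int16_T switch3;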

After defining the subsystem interface we can prepare to generate code by setting the options that define the system-timing model, the style of generated code, and the data characteristics of our target processor. Select the following settings:

Solver: This sets up the fundamental clock for the generated code. Set the Solver Type to Fixed Step, the Solver to Discrete, and the Fixed Step Size to Auto.
Real-Time Workshop: Set the System Target File to ert.tlc, and then use the "auto configures for optimized fixed-point code" option (see Figure 2).
Hardware Implementation: Set the Device Type to Custom, and enter the Number of Bits for each data type in the generated code. All the TI C5510 word lengths are 16 bits except for the data type long, which is 32 bits (see the type-mapping sketch after Figure 2).

Figure 2. Menu for Setting System Target File.
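These word-length settings determine the fixed-width types that the generated code relies on. The sketch below is illustrative only: the actual definitions are produced in the generated rtwtypes.h header, and the typedef names shown are the usual Real-Time Workshop conventions, assumed rather than copied from this project.

/* Illustrative sketch only: the generated rtwtypes.h derives these types
 * from the Hardware Implementation settings. On the C5510, char, short,
 * and int are 16 bits and long is 32 bits, so the mapping resembles:    */
typedef signed   short int16_T;    /* 16-bit signed   */
typedef unsigned short uint16_T;   /* 16-bit unsigned */
typedef signed   long  int32_T;    /* 32-bit signed   */
typedef unsigned long  uint32_T;   /* 32-bit unsigned */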

We can now launch the code-generation process, which produces the report shown in Figure 3 and the following files:

  • AudioFilters.c: Contains the entry points for all code implementing the DSP algorithm. In this example it contains the AudioFilters_step() and AudioFilters_initialize() functions.
  • AudioFilters_data.c: Contains the initial conditions and coefficients for the digital filters.
  • AudioFilters.h: Declares data structures and a public interface to the model.
  • ert_main.c: Provides an example interface for calling the generated code.

Figure 3. Code Generation Report.



The application need not use all the generated code. For example, the ert_main.c file provides a harness that shows how to call the audio-processing code. The scheduling function, rt_OneStep(), calls the audio-processing code, which is contained in the AudioFilters_step() function. In most cases the correct practice is to call rt_OneStep() to implement your algorithm, but in an example as simple as this one it is more straightforward to call AudioFilters_step() directly.
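A minimal sketch of that direct calling pattern, assuming the zero-argument function signatures (the exact signatures appear in the generated AudioFilters.h), might look like this; the main() wrapper is illustrative rather than taken from ert_main.c:

#include "AudioFilters.h"   /* public interface to the generated code */

int main(void)
{
    /* Initialize the generated model code once at startup. */
    AudioFilters_initialize();

    for (;;) {
        /* The imported pointers (pfiltInLeft, ...) must already point at
         * valid audio buffers before each step call. In the real
         * application this call is made from the audio interrupt.       */
        AudioFilters_step();
    }
}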

Real-Time Workshop Embedded Coder generates documented and readable code. The code sample below shows a portion of the AudioFilters.c file, including the beginning statements for the AudioFilters_step() function.



Generated Code Sample for Audio Filtering Subsystem (from AudioFilters.c, lines 1-41)

Set Up Framework for Calling Generated Code

Texas Instruments provides demonstration applications with the 5510 DSK. We can use the audio-filtering demo (the CCS project dsk_app1.pjt) as a framework to call the code generated for this example. This framework must

  • Control the device peripherals on the target hardware board
  • Set up the RTOS resources used by the C5510 DSP
  • Create a scheduler that will call the generated code at an appropriate hardware interrupt

We set up the device peripherals and RTOS resources using the TI Board Support Library and Chip Support Library for the 5510 DSK. The application code stores audio data in buffers that hold 1024 samples each and, when the buffers are full, generates a hardware interrupt that calls our generated code.

Integrate Generated Code with Framework

Next we insert the audio filtering subsystem's generated code into the programming framework. We begin by inserting an include statement for the generated AudioFilters.h file into the application code, as shown below. This header file is the interface to the rest of the generated code and header files.

 

Insert #include Statements for Generated Code Header Files (from dsk_app1.c, lines 136-147)

 

The generated code uses input- and output-signal pointers, such as pfiltInLeft, to access the audio data. We must declare the pointers in the manually written code because we specified their storage class as ImportedExternPointer. We must also declare the switch variables (switch2, switch3), but because their storage class is ImportedExtern, we do not declare them as pointers, as shown in the code below.


Declare Pointers for Audio Filter Input and Output Signals (from dsk_app1.c, lines 216-222)

 

When the input audio buffer is full, the handwritten code raises an interrupt that calls the process_buffer() function, which in turn calls AudioFilters_step() to apply the selected audio filter to the audio data, as shown below. This is the only call required to execute the generated code for the DSP algorithm we designed in Simulink.


Read DIP Switch Settings, Assign Pointers to Memory Locations, and Call AudioFilters_step() Function (from dsk_app1.c, lines 312-334)
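Since the referenced listing is reproduced only as an image, the following is a rough sketch of that calling pattern. The buffer names (rcv_l, rcv_r, xmt_l, xmt_r) and the read_dip_switch() helper are hypothetical placeholders; the actual dsk_app1.c code uses the TI Board Support Library and its own buffer names.

#include "AudioFilters.h"                 /* generated interface */

/* Hypothetical placeholders for the DSK application's audio buffers and
 * DIP-switch access; not the actual dsk_app1.c declarations.            */
extern int16_T rcv_l[1024], rcv_r[1024];  /* received (input) audio      */
extern int16_T xmt_l[1024], xmt_r[1024];  /* audio to transmit (output)  */
extern int16_T read_dip_switch(int sw);

void process_buffer(void)
{
    /* Aim the imported pointers at the current audio buffers. */
    pfiltInLeft   = rcv_l;
    pfiltInRight  = rcv_r;
    pfiltOutLeft  = xmt_l;
    pfiltOutRight = xmt_r;

    /* Select a low-pass, high-pass, or unfiltered response. */
    switch2 = read_dip_switch(2);
    switch3 = read_dip_switch(3);

    /* Run one step of the generated filtering algorithm. */
    AudioFilters_step();
}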

 

We collect the manually written and generated code in a TI CCS project. Real-Time Workshop Embedded Coder generates the MAKE file, AudioFilters.mk, which you can consult for the compilation environment settings. Check its Include Path section to find the MathWorks directories that must be included (<MATLABROOT>/rtw/c/libsrc and <MATLABROOT>/toolbox/dspblks/include), and set the compiler's build options to include those directories.

We compile the TI CCS project to create a DSP executable that can be downloaded to the DSK. Once the executable is running on the DSK, we can play audio test signals through the board's audio I/O ports and control the filtering mode with the DSK's DIP switches.

Verify the Code

We verify the application on the DSK with the test signals from the original Simulink model. Figure 4 shows a test harness built from Source and Sink blocks in the Signal Processing Blockset. This model generates an audio signal and sends it from a PC speaker port to the 5510 DSK through a patch cord. The test harness receives the processed audio from the DSK through another patch cord plugged into the PC's microphone input and displays the audio signal's frequency response in Simulink.

Figure 4. Test Harness for Hardware Verification.



Extending This Example to Other DSP Algorithms and Processors

We can easily extend the framework of this example to other algorithms and processors. For example, we can build a Simulink model of a denoising algorithm such as the one shown in Figures 5a and 5b and generate code for it using the process described here. This case, in which Real-Time Workshop Embedded Coder generates approximately 1,600 lines of C code for the two-stage denoising algorithm, better illustrates the benefits of code generation, since 1,600 lines is a significant amount of hand coding. The denoising algorithm can still be invoked with a single call to a generated _step() function. The single points of declaration and entry in code generated by Real-Time Workshop Embedded Coder make it easy to reuse your manually written framework for many audio-processing algorithms. Using the process described in this article, you can collect the generated code in other development environments, such as Analog Devices VisualDSP++, to target the Blackfin and other fixed-point processors.
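Because every generated algorithm exposes the same initialize/step entry points, swapping algorithms in the handwritten framework amounts to changing a header and two calls. The names below (Denoiser.h, Denoiser_initialize(), Denoiser_step()) are hypothetical; the actual names follow the denoising model's name.

#include "Denoiser.h"   /* hypothetical generated header for the denoising model */

/* Thin wrappers the framework can call in place of the audio-filter entry points. */
void algorithm_init(void) { Denoiser_initialize(); }
void algorithm_step(void) { Denoiser_step(); }   /* called from the audio interrupt */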

Figure 5a. Denoising Subsystem (two-stage denoising).

 

Figure 5b. Denoise Two-Stage Subsystem.

 
