
Nano01 (Autonomous Driving) - U04 - Lesson19 - Project: Behavioral Cloning (Submission Version)

Posted by lijun

Behavioral Cloning


Behavioral Cloning Project

The goals / steps of this project are the following:

  • Use the simulator to collect data of good driving behavior
  • Build a convolutional neural network in Keras that predicts steering angles from images
  • Train and validate the model with a training and validation set
  • Test that the model successfully drives around track one without leaving the road
  • Summarize the results with a written report

I. Files Submitted & Code Quality

1. Submission includes all required files and can be used to run the simulator in autonomous mode

My project includes the following files:

  • model.py containing the script to create and train the model
  • drive.py for driving the car in autonomous mode
  • model.h5 containing a trained convolutional neural network
  • writeup_report.md summarizing the results
  • video.mp4: a video of the vehicle driving autonomously for at least one lap around the track

2. Submission includes functional code

Using the Udacity-provided simulator and my drive.py file, the car can be driven autonomously around the track by executing:

python drive.py model.h5 run1

The fourth argument, run1, is the directory where drive.py saves the images seen by the agent. The following command then creates a video based on the images found in the run1 directory:

python video.py run1

3. Submission code is usable and readable

The model.py file contains the code for training and saving the convolutional neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.

II. Model Architecture and Training Strategy

1. An appropriate model architecture has been employed

[Figure: network architecture, based on the NVIDIA end-to-end model]

The design of the network is based on the NVIDIA model, which NVIDIA has used for its end-to-end self-driving tests. As such, it is well suited for this project.

It is a deep convolutional network that works well for supervised image classification and regression problems. Because the NVIDIA model is well documented, I was able to focus on how to adjust the training images to produce the best result, with some adjustments to the model to avoid overfitting and to add non-linearity that improves the prediction.

I added the following adjustments to the model:

  1. I used a Lambda layer to normalize the input images, which avoids saturation and makes the gradients work better.
  2. I used a cropping layer to drop the parts of each frame that carry little useful information.

In the end, the model looks as follows (a Keras sketch is given after the list):

  • Image normalization
  • Convolution: 5x5, filters: 24, strides: 2x2, activation: relu
  • Convolution: 5x5, filters: 36, strides: 2x2, activation: relu
  • Convolution: 5x5, filters: 48, strides: 2x2, activation: relu
  • Convolution: 3x3, filters: 64, strides: 1x1, activation: relu
  • Convolution: 3x3, filters: 64, strides: 1x1, activation: relu
  • Dropout (0.5)
  • Fully connected: neurons: 100
  • Fully connected: neurons: 50
  • Fully connected: neurons: 10
  • Fully connected: neurons: 1 (output)
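As a concrete reference, here is a minimal Keras sketch of this architecture. The 160x320x3 input shape (the simulator's camera frames) and the cropping margins of 70 top / 25 bottom pixels are my assumptions, since the exact values are not stated above.

```python
from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Dropout, Flatten, Dense

model = Sequential()
# Normalize pixels to [-0.5, 0.5] to avoid saturation and help the gradients
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
# Crop away the sky and the hood (70/25 px margins are assumed values)
model.add(Cropping2D(cropping=((70, 25), (0, 0))))
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Dropout(0.5))
model.add(Flatten())  # flatten before the fully connected stack
model.add(Dense(100))
model.add(Dense(50))
model.add(Dense(10))
model.add(Dense(1))  # single output: the steering angle
```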

2. Attempts to reduce overfitting in the model

  • The training data covers both tracks and includes counter-clockwise driving.
  • The model contains dropout layers in order to reduce overfitting.
  • The model was trained and validated on different data sets to ensure that the model was not overfitting. The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.

3. Model parameter tuning

The model used the Adam optimizer, so the learning rate was not tuned manually.
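In Keras this is a one-line compile step. The mean-squared-error loss is my assumption; it is the natural choice for a steering-angle regression and is consistent with the loss values logged below.

```python
# Adam adapts the learning rate automatically, so no manual tuning is needed.
model.compile(loss='mse', optimizer='adam')
```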

4. Appropriate training data

  • Training data was chosen to keep the vehicle driving on the road. I used a combination of center-lane driving and recovery driving from the left and right sides of the road.
  • For training, I used the following augmentation techniques along with a Python generator to produce an unlimited number of images (see the sketch after this list):
  • Randomly choose the right, left, or center image.
  • For a left image, the steering angle is adjusted by +0.4.
  • For a right image, the steering angle is adjusted by -0.3.
  • Randomly flip the image horizontally (negating the steering angle to match).
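Here is a sketch of such a generator. It assumes `samples` is a list of (center_path, left_path, right_path, steering) tuples read from the simulator's driving log; the variable names, batch size, and 50% flip probability are illustrative assumptions.

```python
import random
import numpy as np
import cv2

def generator(samples, batch_size=32):
    while True:  # loop forever so Keras can draw batches indefinitely
        random.shuffle(samples)
        for offset in range(0, len(samples), batch_size):
            images, angles = [], []
            for center, left, right, steering in samples[offset:offset + batch_size]:
                # Randomly pick a camera and correct the steering angle
                camera = random.choice(['center', 'left', 'right'])
                if camera == 'left':
                    path, angle = left, steering + 0.4
                elif camera == 'right':
                    path, angle = right, steering - 0.3
                else:
                    path, angle = center, steering
                image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
                # Randomly flip to balance left and right turns
                if random.random() < 0.5:
                    image, angle = cv2.flip(image, 1), -angle
                images.append(image)
                angles.append(angle)
            yield np.array(images), np.array(angles)
```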

5. Training, Validation and Test

I split the images into a training set and a validation set in order to measure the performance at every epoch. Testing was done using the simulator.
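Continuing the sketches above, the split and training step could look like this. The 80/20 split ratio and batch size of 32 are my assumptions; the four epochs match the log below.

```python
from sklearn.model_selection import train_test_split

# Hold out 20% of the samples for validation (assumed ratio)
train_samples, valid_samples = train_test_split(samples, test_size=0.2)

model.fit_generator(generator(train_samples, batch_size=32),
                    steps_per_epoch=len(train_samples) // 32,
                    validation_data=generator(valid_samples, batch_size=32),
                    validation_steps=len(valid_samples) // 32,
                    epochs=4)
model.save('model.h5')
```

The training log: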

Epoch 1/4
4108/4108 [==============================] - 350s - loss: 0.0582 - val_loss: 0.0444
Epoch 2/4
4108/4108 [==============================] - 338s - loss: 0.0186 - val_loss: 0.0393
Epoch 3/4
4108/4108 [==============================] - 372s - loss: 0.0110 - val_loss: 0.0372
Epoch 4/4
4108/4108 [==============================] - 349s - loss: 0.0080 - val_loss: 0.0362

Here is the chart for visualizing the loss:

[Figure: training and validation loss per epoch]

III. Reflection

The performance on track 2 is not as good. I probably need to collect more driving data on track 2, and altering the image brightness (lighter or darker) during augmentation would also be a good idea; a sketch of such an augmentation follows.
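For example, a brightness augmentation could simply scale the V channel in HSV space. This is a sketch of one common approach, not something this project already implements.

```python
import random
import cv2
import numpy as np

def random_brightness(image):
    # Scale the V (brightness) channel by a random factor
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    v = hsv[:, :, 2].astype(np.float32) * random.uniform(0.4, 1.2)
    hsv[:, :, 2] = np.clip(v, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```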

In summary, this was a really interesting project. Deep learning is an exciting field.