Hey everyone! This week’s blog post is going to be relatively short, since last week’s post already covered the mechanics and mathematics behind how I planned to generate adversarial examples of traffic sign images. This week was about writing a program that implements the method I described last week. Thankfully, I was able to generate some adversarial examples! I have included an image of them below:
The two images, put side by side like this, definitely show how similar adversarial examples are to their original counterparts. However, there are clear differences between the two: the adversarial image looks more pixelated and adds more noise outside the red outline. This could be because I had some difficulty creating and implementing the program. For a long while, all of my adversarial images were coming out as greyscale, which was definitely not what I was looking for. This happened because the gradient descent model I was using to create the adversarial images would first preprocess every image by converting it to greyscale to increase the program's accuracy; the algorithm was then trained on these greyscale images, and so it output greyscale adversarial examples. I was not completely sure how to convert a greyscale image back to RGB values, so I removed the preprocessing step and instead worked with the original colored images. This may have led to the adversarial image being more pixelated and slightly different from the original. If I have extra time on the project, I will definitely try to optimize the adversarial generation program further so that the original and adversarial examples look like carbon copies of each other. It could be as simple as changing how much influence the original image carries when generating an adversarial example. Otherwise, I am very happy with how the adversarial examples turned out.
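To give a rough sense of that "influence" knob, here is a minimal sketch of a fast-gradient-sign-style perturbation in NumPy. This is a toy illustration, not the project's actual program: the `fgsm_example` name, the random stand-in gradient, and the epsilon value are all illustrative assumptions, and a real run would use the gradient computed from the trained model.

```python
import numpy as np

def fgsm_example(image, grad, epsilon=0.02):
    """Fast-gradient-sign-style perturbation: nudge each pixel in the
    direction that increases the loss, keeping all three RGB channels.
    A smaller epsilon keeps the result closer to the original image."""
    adversarial = image + epsilon * np.sign(grad)
    return np.clip(adversarial, 0.0, 1.0)  # stay in the valid pixel range

# Toy demo: a random RGB "sign" image and a fake loss gradient.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))          # H x W x RGB, values in [0, 1]
grad = rng.standard_normal(img.shape)  # stand-in for a real model gradient

adv = fgsm_example(img, grad, epsilon=0.02)
print(adv.shape)  # (32, 32, 3) -- the color channels are preserved
```

No single pixel moves by more than epsilon, which is exactly the kind of dial that controls how much the adversarial example resembles the original. (As an aside, a greyscale array `g` can be expanded back to three channels with `np.stack([g] * 3, axis=-1)`, although that only duplicates the channel and cannot restore the original colors.)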
The next challenge for me is to generate about 30 images for each type of traffic sign, about 1,200 images in total, and compile them into a single folder of adversarial examples. This was supposed to be implemented sometime this past week, but because I had some difficulty getting the adversarial image program to work the way I wanted it to, I had to leave this task for next week. It most likely will not take that long, though, as I am sure Python has some functions that will make the process quick and painless.
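The batching step really is quick in Python. Here is a small sketch of the loop-and-save structure, assuming hypothetical sign names and placeholder random arrays in place of the real generated images; the function name, file layout, and `.npy` format are my own choices for illustration.

```python
from pathlib import Path
import numpy as np

def save_adversarial_batch(out_dir, sign_names, per_sign=30):
    """Save `per_sign` adversarial arrays per sign type into a single
    folder with predictable filenames. The random array below is a
    placeholder for the real adversarial generation step."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    rng = np.random.default_rng(42)
    for name in sign_names:
        for i in range(per_sign):
            adv = rng.random((32, 32, 3))  # placeholder "adversarial image"
            np.save(out / f"{name}_{i:02d}.npy", adv)
    return len(list(out.glob("*.npy")))

# Hypothetical sign classes; the real project has around 40 of them.
signs = ["stop", "yield", "speed_30"]
count = save_adversarial_batch("adversarial_examples", signs, per_sign=30)
print(count)  # 3 signs x 30 images = 90 files
```

With the full set of sign types, the same loop would produce the roughly 1,200 files in one folder.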
Stay tuned for next week, where I will be testing my machine learning algorithms on these adversarial images and seeing if there is a change in their classification accuracy! I’m very excited!
Until next time,