Hey everyone! Welcome to Post #10, the first double-digit entry of the Attacks on Self-Driving Cars blog!
In the last blog post, I explained that I was running about two weeks behind schedule because of the large obstacle I faced with the convolutional neural network's accuracy. Originally, the plan was to apply the principal component analysis (PCA) algorithm to the neural network that generated adversarial attacks, and then generate a new batch of images to be used on the classifiers implemented with PCA. So, early in the week, I started working on implementing the algorithm into the adversarial image generator. However, I hit another roadblock this week, involving this image I displayed in Blog Post #3:
With the implementation of the PCA algorithm, the data will be reduced and reformatted. However, J, the classification loss function that the neural network minimizes to reduce its error, must also change. The data will have fewer dimensions, and with a change in dimensionality comes a change in how the error is calculated. Reusing the same loss function could therefore produce an adversarial image that looks nothing like its original counterpart.
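To make the dimensionality mismatch concrete, here is a minimal sketch (the array shapes and names are illustrative, not from my actual project code). A loss gradient computed over raw pixels lives in the original pixel space, while a PCA-based classifier only sees the reduced space, so the two no longer line up:

```python
# Hypothetical sketch: PCA shrinks the input dimensionality, so a loss J
# (and its gradient) defined over raw pixels no longer matches the space
# a PCA-based classifier actually operates in.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for 500 flattened 32x32 grayscale traffic-sign images.
images = rng.random((500, 1024))

pca = PCA(n_components=50)          # keep the top 50 principal components
reduced = pca.fit_transform(images)

print(images.shape)   # (500, 1024) -- the space the original loss J sees
print(reduced.shape)  # (500, 50)   -- the space the PCA classifier sees
```

Any adversarial perturbation crafted against the 1024-dimensional loss has no direct counterpart in the 50-dimensional space, which is exactly why the loss function would need to be redefined.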
So the question is: how must the loss function change? Unfortunately, I'm not sure I can answer this question properly. Given the time constraints of the project, my external advisor suggested I make an executive decision with the attack generator: rather than creating entirely new adversarial images, he suggested reusing the same adversarial images, because my testing from Blog Post #6 showed that they transfer effectively from one architecture to another, so the same could hold here.
With that, I moved on to the next stage of the project: testing, leaving the question above for the future work I do in adversarial machine learning. I started with the logistic regression model, training it on the original traffic sign images and testing it on the adversarial images. The classification accuracy for this model turned out to be about 71.2%, which is higher than the results from Blog Post #6. This is definitely a great sign!
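The evaluation itself follows a simple pattern: fit on clean data, score on perturbed data. Here is a rough sketch using scikit-learn; the data, class count, and perturbation here are synthetic stand-ins, not my actual traffic sign dataset or attack:

```python
# Hypothetical sketch of the evaluation: train logistic regression on clean
# images, then measure classification accuracy on adversarial versions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X_clean = rng.random((600, 100))       # stand-in flattened training images
y = rng.integers(0, 5, size=600)       # 5 illustrative traffic-sign classes

clf = LogisticRegression(max_iter=1000).fit(X_clean, y)

# Stand-in "adversarial" set: clean images plus a small perturbation
# (a real attack would craft the perturbation from the model's loss).
X_adv = X_clean + 0.05 * rng.standard_normal(X_clean.shape)
adv_accuracy = accuracy_score(y, clf.predict(X_adv))
print(f"accuracy on adversarial images: {adv_accuracy:.3f}")
```

In my actual experiment, this accuracy came out to about 71.2% on the real adversarial images.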
Next week I will be testing the adversarial images on the convolutional neural network implemented with PCA. If the results are similar to the logistic regression model's, it will show that PCA has a genuinely positive effect when acting as a defense against adversarial attacks, which would be a great finding.
Strap in, everyone! We’re in the homestretch of the project!