Apart from the slow and not always helpful feedback on grades, this was an excellent course! It provided a solid introduction to foundational deep learning concepts. The practical components were particularly beneficial; at this stage in my journey, the step-by-step guidance in the video lectures was exactly what I needed to bridge the gap between theory and application.

Assignment 4: MNIST and Convolutional Neural Networks

This assignment challenged us to build, train, and test a Convolutional Neural Network (CNN) using the MNIST dataset. While the lectures provided a clear roadmap, implementing the code was rarely simple. It required constant tweaking and troubleshooting to ensure everything functioned correctly. Despite the hurdles, seeing the structure, flow, and internal processes of a model in action was incredibly insightful.
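
For anyone wanting to try something similar, here is a minimal Keras sketch of the kind of MNIST CNN this assignment asked for. The layer sizes, optimizer, and epoch count are illustrative assumptions rather than the exact architecture from the course materials.

```python
# A minimal CNN for MNIST in Keras. Layer sizes and hyperparameters are
# illustrative assumptions, not the exact architecture from the assignment.
import tensorflow as tf
from tensorflow.keras import layers, models

# MNIST: 70,000 28x28 grayscale digit images across 10 classes
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0  # add channel dim, scale to [0, 1]
x_test = x_test[..., None].astype("float32") / 255.0

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # learn local spatial features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),    # one probability per digit class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```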

Assignment 5: Comparing Architectures (RNNs and GANs)

This assignment shifted from hands-on coding toward a conceptual, research-based exercise. I produced two reports comparing the CNNs from the previous assignment to Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GANs).

The distinctions are fascinating:

CNNs vs. RNNs: CNNs excel at processing spatial correlations, scanning grid-structured data such as images from top to bottom and left to right. RNNs, by contrast, are designed for sequential data: they excel at predicting what comes next in a series, making them ideal for time-series or language tasks.

CNNs vs. GANs: While CNNs are built for detection and classification, GANs focus on creation. They learn the distribution of existing data in order to generate entirely new, realistic data points (see the short sketch after this list).
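
To make the contrast concrete, here is a toy Keras sketch of the two model shapes. The layer sizes are arbitrary assumptions chosen only to show the structural difference; a real GAN would also pair the generator with a discriminator and an adversarial training loop.

```python
# Toy sketches of an RNN and a GAN generator. Sizes are arbitrary assumptions
# used only to illustrate the structural contrast with a CNN classifier.
from tensorflow.keras import layers, models

# RNN: consumes a sequence step by step and predicts what comes next,
# e.g. the next value of a length-20 univariate time series.
rnn = models.Sequential([
    layers.Input(shape=(20, 1)),
    layers.SimpleRNN(32),   # hidden state carries context from earlier steps
    layers.Dense(1),        # next-step prediction
])

# GAN generator: maps random noise to brand-new data points
# (here, fake 28x28 single-channel images).
generator = models.Sequential([
    layers.Input(shape=(100,)),                  # 100-dimensional noise vector
    layers.Dense(7 * 7 * 64, activation="relu"),
    layers.Reshape((7, 7, 64)),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
])
```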

The Final Project: From MNIST to CIFAR-10

The final project extended our work with CNNs by moving from the simple MNIST digits to the more complex CIFAR-10 dataset. This required loading color images spanning 10 diverse classes, building a new model, and attempting to improve on its performance.
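
As a rough illustration, the sketch below loads CIFAR-10 with Keras and defines a slightly deeper CNN for 32x32 color images. It is a minimal sketch under assumed layer sizes and hyperparameters, not the exact model I submitted for the project.

```python
# Loading CIFAR-10 and defining a CNN for 32x32 RGB images. Layer sizes and
# hyperparameters are assumptions for illustration, not the submitted model.
import tensorflow as tf
from tensorflow.keras import layers, models

# CIFAR-10: 60,000 32x32 color images across 10 classes (airplane, cat, ship, ...)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),           # 3 color channels instead of MNIST's 1
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```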

The contrast in results was a great lesson in data complexity. The MNIST model hit 99% accuracy within just 2,500 steps. In comparison, the CIFAR-10 model only reached 70% accuracy after 10,000 steps. It was a clear demonstration of how dataset sophistication directly impacts model performance and training requirements.

Looking Ahead

This was undoubtedly an intense, fast-paced course, but the exposure to the world of deep learning has been invaluable. It’s a fascinating field that I look forward to exploring further. For now, however, it’s time to pivot back to my Capstone Project. Since we wrapped up a bit earlier than expected, I might actually get to enjoy a true break this Spring ‘Break’!