Great course. Easy to understand, with well-synthesized information on the most relevant topics. Even though some videos repeat information due to editing mistakes, everything is still understandable.
I really enjoyed this course. It would be awesome to see at least one training example using a GPU (maybe in Google Colab, since not everyone owns one) so we could train the deepest networks from scratch.
Created by Xirui Z•
Too hard for someone new to TensorFlow.
Created by Sanskar j•
Assignments can be made better
Created by Jisheng L•
The assignments need improvement
Created by Pedro C•
Notebooks were not functional
Created by Modassir A•
The content needs improvement
Created by Olatunji O•
Notebooks are a bit buggy
Created by Yi-Hao K•
Serious bug in assignment
Created by Yide Z•
Too many errors in the tests
Labs should be tougher
Created by Kenneth C V•
Very complex subject
Created by Pavao S•
Not enough theory
Created by neda m•
Created by Volker H•
too many bugs
so hard :(
Created by Ryan W•
It was okay. Andrew is obviously very knowledgeable, and there is a wealth of knowledge here. I could go through it a couple more times and still pick up new stuff.
That being said, I've heard him mention he did these videos at like 1 or 2 in the morning after work, and it's very obvious from the videos. He makes so many mistakes that every other lecture (it seems like) has a **CORRECTION** notification next to it. I mean, it's great that they give this additional correction information, but it would be even better if they just redid the videos.
Furthermore, he often stops in the middle of a video and then repeats the last sentence he said, because he made another mistake. I get it: Andrew is very successful, he's very busy, and I am definitely grateful for the knowledge he's provided in this course. But this makes for a very poor learning experience, because I'm taking notes and have to go back and redo them, plus the general angst you get when you're learning something and someone says "oh wait, nope, that's not right, forget that." Well, for God's sake, I already learned it.
Finally, the submission assignments are the most annoying things I have ever come across. They are riddled with errors and misleading information, where they literally tell you to use the wrong parameters, and then never fix it. You have to go into the discussions to find out why your code is marked wrong, even though you're doing it right.
Then, you'll get everything right in your code for the test cases, and when you go to submit, it fails you. And when I say it fails you, it gives you literally 0 out of like 30 points. And the grader output just says "your submission was incorrect" — like, no way, I had no idea. Thank you for that very **cough** helpful piece of info.
If you go to the discussions, you find out this is actually a problem with how the grader is built, because if you don't format your code exactly the right way, it fails you, even if your solution is correct. I don't understand why it can be right when you run test cases, but submitting it fails.
Overall, I'd give it 3 stars before factoring in the poor grading, but because of the grader's performance I have to bring it down to 2. I can't tell you how much time I wasted trying to figure out why my code was wrong, only to realize it was right and they had screwed up their implementation.
In conclusion, this reminded me of a college course where the professor has a ton of knowledge and is in high demand, and doesn't really care whether you get anything out of the course or not. It's sloppy, doesn't seem to be maintained very well, and most of the mentors' responses are literally "did you look at your colleagues' similar questions?" Like, no I didn't, that's why I'm asking. Why am I paying you so I can spend more time debugging your screw-ups? Or maybe I did look and I still don't get it, because your explanations are ridiculously unclear.
I have one more course in this specialization, and I absolutely can't wait for it to be over with so I can move on to more productive (and immersive, since these exercises are just one-off "do this, then do that" instructions — I still don't know how to set up a Deep Learning project from scratch) ways to learn Deep Learning. If Andrew weren't so knowledgeable about this topic, I wouldn't even take it, because it's that bad. But really, you can't get this type of knowledge in such a condensed form anywhere else.
Created by Juan R•
I found it very easy to go through the assignments, and the quizzes were great, but I do have some complaints: -- I didn't get quiz feedback (it seems to be disabled), which is a huge letdown; I wasn't able to completely grasp the concepts. -- For example, with the Gram matrix, I had to accept on faith the claim that "if the filters are quite similar then the dot product will be high". Show this, please. #mastery #selfcontained -- Another example: in the Neural Style Transfer programming assignment, it is POORLY explained how the framework works when it comes to setting a_G and a_C. It then says this will be covered (explained) in the "model" function, which it wasn't. -- On the plus side, I have printed most of the mentioned papers and am starting to read them. I loved that you recommended papers in this lesson, and the rest of the programming assignments were great, especially when you would provide a "Hint" to go to the docs and look up the method, etc.
Created by Jeff N•
I feel this is by far the weakest of the first 4 courses in the series. The information is really valuable, but the homework offers almost no opportunities to actually explore CNN architectures. The homework is more about implementing a few parts of a dictated network where all of the critical information is provided. The only exercises are in more vector manipulation and knowledge of frameworks that are never covered in the actual course material. I'd love real framework material and real opportunities to practice using it, but the limited exposure here does not cut it.
Basically, I listened to the videos talk about CNNs, answered quiz questions about minor footnotes in the lectures, and then messed with vectors again. Oh, and the video editing was pretty choppy in this course compared to the others. Disappointed.
Created by Thomas D•
The material covered in the course is very good, but the instructors really need to go back over the course materials (particularly the homework) and clean them up. Many of the links to the TensorFlow documentation are out of date and point to missing information. These aren't necessarily corrected in the forums either, which do not seem to have much of a TA presence anymore. It would be nice if the lectures and slides could be updated to incorporate the errata in the syllabus, though I understand that could be a lot of work. At minimum, it seems like it would be better to present the errata before the lectures in the syllabus. Admittedly it's a small complaint, but it seems like an easy fix, and the fact that it hasn't been done says something about the amount of care put into maintaining the course.
Created by Alexandre E•
The course is great, but there were several bugs in the homework, including misleading tests.
In one, getting the right value (triplet loss) results in a failing grade, while getting the wrong values (using help from the forum) gets you to pass the test. In another, there were corrupted files; one has to add a print statement in a helper function, learn which file is corrupted, rename it, reload the exercise, and voilà, it works.
Clearly, the course staff should survey the forum more closely to address these issues. Hopefully they will be addressed soon, and these comments will become moot.
That aside, the quality of the videos and the insight provided by Andrew Ng are second to none. Thanks for the outstanding instruction.
Created by Jacob T•
I felt compelled to review this particular course to voice my dissatisfaction. The course, as it stands right now, is rather poor in quality. The lectures contain several errors that are lazily corrected. Sections of video are incorrectly spliced together, which chops up the flow. The programming assignments drop sharply in quality from the previous courses; they're pretty close to "type the stuff we tell you to type" at this point. Even then, there are several errors in those assignments that require digging into the forums, because the course instructors seem to lack quality control.
I quite enjoyed courses 1-3 of this specialization, but this course has left quite a bad taste in my mouth.
Created by Robert D•
While the content of the course is thought-provoking and up to date, the overall quality is quite low. The videos are of moderate quality with very poor audio editing, and the programming exercises suffer from poor auto-graders. Regarding the programming assignments, I spent most of my time trying to find just the right combination of function calls despite getting exactly the right answer in my tests. Typically this comes down to using just the right NumPy or TensorFlow function, despite either one giving the same results. Overall, I wouldn't recommend taking this course for credit, but rather simply extracting the relevant lessons and recommended readings.
Created by Slobodan C•
The lectures are quite interesting, but the course should be at least twice as long to cover CNNs in enough depth for practical application. As for the assignments, the grader and the notebook worked terribly compared to all the other courses I have taken on Coursera so far. There were many discrepancies between the notebook and the grader: code matching the expected output in the notebook would fail in the grader, etc. Starting about two days before the assignment deadlines, loading models into the notebook would take 30-40 minutes and crash most of the time, with unreadable error messages. Files got corrupted, sessions ran for hours...
Created by Juan M•
As with Andrew's other courses, the lectures were great: easy to follow, clear explanations, great insights, lots of practical advice. The main reason for the lower-than-average rating is all the issues with the programming assignments. There seemed to be a larger-than-usual number of errors in the notebooks, and one in particular (Week 4) had a problem with the grader that persisted for several weeks (if not still ongoing). In addition, several of the assignments didn't really help in understanding CNN algorithms, but instead concentrated on the minutiae of frameworks like TensorFlow.