One common mistake is to introduce something I call the Coverage Killer. In some organizations Test Driven Development has been misunderstood as "creating a lot of tests to get good coverage" - this is wrong, because you get what you measure. Some organizations have gone so far that they started removing code just to improve their coverage percentage... duh. Well, you get what you measure ;) And that's only part of the problem: when you (or your customer, Product Owner...) set a rule like "the minimum test coverage is 85%", you are about to introduce overkill. Your development team will write tests just for the sake of coverage, not to ensure that your business functionality works - which is far more important than coverage.
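To make the difference concrete, here is a minimal, hypothetical sketch (the `apply_discount` function and the test names are made up for illustration): both tests execute the same line of code and earn the same coverage, but only one of them would catch a broken business rule.

```python
def apply_discount(price, percent):
    """Reduce a price by the given percentage."""
    return round(price * (1 - percent / 100), 2)

# Coverage Killer style: runs the code, asserts nothing.
# 100% line coverage, zero confidence.
def test_apply_discount_runs():
    apply_discount(100.0, 10)

# Behavior style: pins down what the business actually expects.
def test_apply_discount_takes_ten_percent_off():
    assert apply_discount(100.0, 10) == 90.0

test_apply_discount_runs()
test_apply_discount_takes_ten_percent_off()
```

A coverage report cannot tell these two apart - which is exactly why a coverage target alone is a poor goal.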
Usually the team does not put the Coverage Killer into their definition of done, so to take care of this issue, let your team define its own definition of done. Simple.
Another common mistake is to leave refactoring out of your definition of done. In practice I've seen several teams having "refactor
So how far can you go with your definition of done? I will get back to this, so stay tuned :)