The way we're all taught to profile is to install a profiler like dotTrace and then run your code under it. It shows you the hotspots, and you dive into that code to figure out how to optimize it. This is a great approach for many cases. I've used it myself a lot, especially back when I did game programming.
However, there are a lot of times when a much easier approach is equally effective, and in some cases much more effective. It's fast, it's simple, and it gets the job done. What is it? Run your program, and while it is in the slow part, break into the debugger and note where it stopped. Resume, break again, and repeat about 10 times.
In a great many situations you will find that of the 10 times you broke into the program, it was in the same code each time. You now know where the code is spending its time, and you found out without installing a profiler, without running your program under it, and without having to dig through its results.
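The same sampling idea can even be automated. The sketch below is a minimal Python illustration of the principle rather than the manual debugger workflow itself: a background thread repeatedly "breaks in" by sampling the main thread's stack and tallying which function it lands in. The function names (slow_work, fast_work) are made up for the example.

```python
import sys
import threading
import time
from collections import Counter

samples = Counter()
stop = threading.Event()

def sampler(target_thread_id, interval=0.01):
    # Each iteration is one "debugger break": look at the top Python
    # frame of the target thread and record which function it is in.
    while not stop.is_set():
        frame = sys._current_frames().get(target_thread_id)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

def fast_work():
    time.sleep(0.02)   # cheap step, rarely caught by a sample

def slow_work():
    time.sleep(0.5)    # the hotspot the samples should reveal

def main():
    t = threading.Thread(
        target=sampler, args=(threading.main_thread().ident,), daemon=True
    )
    t.start()
    fast_work()
    slow_work()
    stop.set()
    t.join()
    # The most frequent sample points at where the time went.
    print(samples.most_common(1)[0][0])

main()
```

Just as with manual breaks, the function that dominates the tally is almost certainly where the time is going, because a random sample is far more likely to land in the expensive code than the cheap code.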
This approach is not a panacea. First off, a profiler will give you the actual lines of code that are taking the time, and how much time each one takes. The quick approach works great when the hotspot is a single line of code; the more spread out the problem is, the less helpful it becomes. Even if all the time is spent in one method, it may be several distinct parts of a loop, each taking a significant share, that are sucking up the time. There are other cases as well where this is not the best approach.
But there are also some great advantages to this approach. With our reporting/docgen program, a lot of the time the big hit is a single select in the report template. A profiler will show that the selects are taking the time, but breaking into the debugger makes it easy to see which select is running on each break. If it is consistently the same one or two selects, then you know that what needs to be optimized is not executing selects in general, but a specific select. (Or a column in the database needs to be indexed.)
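Once the breaks have fingered one specific select, a quick way to check whether an index would help is to ask the database for its query plan before and after adding one. A minimal sketch using an in-memory SQLite database; the table, column, and index names are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")

query = "SELECT * FROM orders WHERE customer = 'acme'"

# Without an index, SQLite must scan the whole table on every select.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before[0][-1])   # e.g. "SCAN orders"

# Index the column the slow select filters on, then re-check the plan.
con.execute("CREATE INDEX idx_customer ON orders(customer)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after[0][-1])    # e.g. "SEARCH orders USING INDEX idx_customer ..."
```

The plan flipping from a full-table scan to an index search is the database's way of confirming that the specific select, not selects in general, was the problem.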
Another case we have seen at times is that the big hit is importing images or sub-reports from a specific server. Again, if on each break the program is waiting for a file to be read from a website, and the break always shows it hitting the same website, then the problem clearly is not reading files in general, but the response time of that specific server.
With very rare exceptions, I always use the debugger break approach first. When it is sufficient (about 60% of the time), I am working on the performance issue a minute or two after starting.