The search for several tiles is a simple flood fill algorithm. You start at a point and move outward.
In this case you can stop the second you find the condition you want, so in most cases there's no extra computation at all.
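To make the early-exit idea concrete, here's a rough Python sketch (the grid representation, function names, and tile values are my own, not the game's):

```python
from collections import deque

def find_tile(grid, start, radius, wants):
    """Flood fill (BFS) outward from `start`, stopping as soon as a tile
    satisfying `wants` is found, or once `radius` is exceeded."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (x, y), dist = frontier.popleft()
        if wants(grid.get((x, y))):
            return (x, y)            # early exit: no wasted checks
        if dist == radius:
            continue                 # don't expand past the diamond's edge
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append(((nx, ny), dist + 1))
    return None                      # nothing suitable within range
```

Because the frontier expands one ring of the diamond at a time, a match two tiles away is found after checking only the tiles within distance two, regardless of how large the radius is.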
Originally this was n = 1, for 5 checks (the tile plus the 4 surrounding tiles).
Now it is n = 4, for 41 checks (accounting for the growth of the detection diamond: 1 + 3 + 5 + 7 + 9 + 7 + 5 + 3 + 1).
As the row-by-row sum shows, a diamond of radius n contains 2n^2 + 2n + 1 tiles, so the cost grows as roughly n^2, but for small n it is negligible. After all, a single thread on your computer will likely process several hundred million of these checks every second.
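For anyone who wants to check the arithmetic, here's a quick sketch comparing the row-by-row sum against the closed form 2n^2 + 2n + 1 for a diamond of radius n (helper names are my own):

```python
def diamond_tiles(n):
    # Tiles within Manhattan distance n of the center: 2n^2 + 2n + 1.
    return 2 * n * n + 2 * n + 1

def diamond_rows(n):
    # Row widths going up from the middle: 2n+1, 2n-1, ..., 3, 1.
    widths = list(range(1, 2 * n + 2, 2))
    # Middle row plus both halves: sum all widths once, then every width
    # except the widest a second time for the mirrored half.
    return sum(widths) + sum(widths[:-1])

assert diamond_tiles(1) == 5    # the tile plus 4 neighbors
assert diamond_tiles(4) == 41   # 1+3+5+7+9+7+5+3+1
```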
Furthermore, because we can stop once we have found the condition we want, in most cases we likely aren't doing any more checks than we used to. Add to this that plants generally don't update every tick, but maybe once a second or longer, and this is not a frequent computation.
One thing that is likely affecting long computation times is path-finding over long paths. This includes dupes and unrestricted critters, i.e. critters that aren't confined to a small space. It generally comes about from digging out huge rooms and just letting everything fall: any dupe trying to path through that area now has many possible routes to choose from and has to figure out the best one, and critters in that area have a larger space to wander.
Another thing that might be affecting long computation times is job priority sorting. The more jobs your dupes have, the more things they need to sort through to decide what to do next. Good sorting algorithms are O(n log n), which is better than O(n^2), but for large n even that becomes cumbersome. When your dupes have 32,000 hauling tasks from you digging out an entire area, it takes quite a few processor cycles to sort out their priorities every time a new task arrives. Even if each dupe keeps a sorted list, re-sorting it after adding an item costs O(n log n); keeping it in a structure like a skip list brings a single insertion down to O(log n) expected time (search and insertion are both logarithmic), or O(m log n) across m dupes. In most cases, though, m would be small and n would be the driving factor. From the way priorities are presented, I wouldn't be surprised if something like this were already done.
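As a sketch of the idea (using a binary heap via Python's heapq rather than a skip list; the task names and priority values are made up), a per-dupe priority queue makes adding one task O(log n) instead of re-sorting the whole list:

```python
import heapq

class TaskQueue:
    """Per-dupe task queue: O(log n) to add a task, O(log n) to pop the
    best one, rather than an O(n log n) re-sort on every change."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal priorities pop in FIFO order

    def add(self, priority, task):
        # heapq is a min-heap, so negate: higher priority pops first.
        heapq.heappush(self._heap, (-priority, self._seq, task))
        self._seq += 1

    def pop_best(self):
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.add(5, "dig")
q.add(9, "supply oxygen")
q.add(5, "haul")
print(q.pop_best())   # prints "supply oxygen"
```

A heap doesn't support the O(log n) arbitrary lookups a skip list does, but for the "what should I do next?" question it's the simpler fit.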
Ultimately, though, the devs are the ones who can tell which areas they need to optimize, as they can run profilers that report back how much time is spent where. Many of the things I mention above can also run concurrently: the path one dupe takes is independent of the paths other dupes take, and the same goes for priority lists. Even the plants could each run in their own thread that updates at a set frequency. With a thread manager to handle these threads, you could utilize more cores than the game currently uses (seems to be close to 4.5 cores) and improve performance.
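A rough sketch of the "independent work runs in parallel" idea, with Python's ThreadPoolExecutor standing in for a real thread manager (the pathfinder here is a hypothetical placeholder, and the dupe names and goals are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def find_path(dupe, goal):
    # Placeholder for a real path search; each call touches only its own
    # inputs, so the searches don't interfere with each other.
    return [dupe, "hallway", goal]

dupes = ["Meep", "Ada", "Catalina"]
goals = ["water pump", "mess table", "cot"]

# Each dupe's search is independent, so they can be farmed out to a pool
# of worker threads; map() returns the results in the original order.
with ThreadPoolExecutor(max_workers=4) as pool:
    paths = list(pool.map(find_path, dupes, goals))
```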
The problem, though, lies in doing this while ensuring that concurrency bugs don't occur, e.g. two dupes both sorting their priorities and then selecting the same task. This can be solved in standard ways, which I won't go into here.
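As a minimal illustration of one such fix (not necessarily what the game does): have dupes claim tasks under a lock, so the check-and-take is atomic and only one claimant can win:

```python
import threading

class TaskBoard:
    """Shared task pool; claiming happens under a lock so two workers
    can't both take the same task."""
    def __init__(self, tasks):
        self._tasks = set(tasks)
        self._lock = threading.Lock()

    def try_claim(self, task):
        with self._lock:
            if task in self._tasks:
                self._tasks.remove(task)
                return True      # this dupe got the job
            return False         # someone else took it first

board = TaskBoard({"haul ore"})
results = []
threads = [
    threading.Thread(target=lambda: results.append(board.try_claim("haul ore")))
    for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Exactly one of the two claims succeeds, no matter how the threads interleave.
```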
In short, you are right that this would cause more processing to be done, but the amount added compared to what is already being done is tiny: a flea on a giant. The devs have bigger fish to fry than denying such a small request, especially one that fixes other problems.