You haven't provided any context, so there is little to say about this particular design. For all we know from what you told us, this could be homework from a student who simply made a mistake.
However, I have put resistors in parallel with LEDs in real professional designs. In all cases that I recall, the reason was that whatever was driving the LED had enough leakage so that the LED would be lit dimly when it was supposed to be off. A resistor in parallel effectively creates a minimum current threshold for the LED to be on at all.
For example, here is a schematic snippet of a real commercial product I designed last year:
In this case, I wanted D10 to indicate not only that the power supply 5VI was on, but that it was at least up to a minimum valid threshold. The purpose was to provide a quick visual indication that the power supply was up. I didn't want the LED to come on if the power supply was only at half its intended voltage, for example.
The TL431 (IC7) is used as a fixed-voltage comparator. It turns on at 2.5 V on its control input. The supply being tested is nominally 5 V. R28 and R29 form a voltage divider so that IC7 is guaranteed to be on when the supply is fully up, accounting for slop in the supply voltage, slop in the TL431 threshold voltage, and the tolerance of the two resistors. By working the voltage divider backwards, you should be able to see that the nominal turn-on level for IC7 is when the 5VI supply reaches 4.73 V.
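To illustrate the divider arithmetic, here is a quick sketch with hypothetical 1% resistor values (the actual R28 and R29 values aren't reproduced in this text, so these are assumptions chosen to land near the stated threshold):

```python
# Working the divider backwards: IC7 turns on when the divider output
# reaches the TL431's 2.5 V reference. Resistor values are assumed
# for illustration, not taken from the real schematic.
V_REF = 2.5       # TL431 internal reference (V)
R28 = 8.87e3      # top resistor, 5VI to the ref pin (assumed, ohms)
R29 = 10.0e3      # bottom resistor, ref pin to ground (assumed, ohms)

# Turn-on condition:  5VI * R29 / (R28 + R29) = V_REF
turn_on = V_REF * (R28 + R29) / R29
print(f"Nominal turn-on threshold: {turn_on:.2f} V")  # ~4.72 V
```

With these assumed values the threshold comes out near the 4.73 V figure above; the real resistors would have been picked to hit the intended number after the tolerance analysis.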
Note that the TL431 is powered from the same output that it also switches. That means it always sinks some minimal current, even when "off". R26 was sized so that the maximum quiescent current thru IC7 causes too small a voltage drop for D10 to turn on visibly, even in relatively dark ambient conditions.
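A sketch of that sizing constraint, again with assumed numbers since neither the R26 value nor the worst-case quiescent current is given here:

```python
# R26 is the resistor in parallel with D10. The TL431's quiescent
# current flows through it even when IC7 is "off", so the drop it
# develops across the LED must stay well below the LED's turn-on knee.
# Both values below are assumptions for illustration.
I_QUIESCENT_MAX = 1.0e-3   # assumed worst-case TL431 "off" current (A)
R26 = 330.0                # assumed parallel resistor value (ohms)
V_LED_KNEE = 1.6           # approx. voltage below which a red LED
                           # passes negligible current (V)

v_drop = I_QUIESCENT_MAX * R26
assert v_drop < V_LED_KNEE  # 0.33 V: far below the knee, so LED stays dark
print(f"Worst-case off-state drop across D10: {v_drop:.2f} V")
```

The design margin matters here: you want the worst-case drop comfortably below the knee, not merely below the nominal forward voltage, because "dimly lit" starts well before full conduction.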
Humans perceive light brightness logarithmically, not linearly. Even though the LED might be on a fraction of a percent of its full brightness, that brightness will still be quite apparent in a dark room or cabinet.
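A rough way to quantify this is Stevens' power law, which models perceived brightness as luminance raised to a compressive exponent; roughly 0.33 is a commonly cited figure for a point source viewed in the dark (the exact exponent is an assumption here):

```python
# Compressive brightness perception per Stevens' power law,
# perceived ~ luminance ** 0.33 (exponent assumed for illustration).
relative_luminance = 0.005          # LED at 0.5% of full brightness
perceived = relative_luminance ** 0.33
print(f"Perceived brightness: {perceived:.0%} of full")  # roughly 17%
```

So an LED emitting half a percent of its full output can still look like a meaningful fraction of "on" to a dark-adapted eye.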
You might think that the low brightness would be obviously different from the full brightness, but it can be very deceiving in a dark environment when there isn't something at full brightness to compare it to. You really don't want to make it hard for your customer or your own field people to tell whether the supply is up or not when they're down in the bowels of a ship, with poor lighting, in cramped conditions, wearing hearing protection because they're next to the engine, as they're passing thru the Suez Canal with 115 °F outside and 130 °F inside. A resistor they'll never know was there will make sure the LED looks off when it's supposed to be.
As with a lot of good design, the better it is, the less anyone else even knows there was an issue.
There may be another reason in your particular case:
Notice the two test points. The line they are connected to may be doing more than just driving the LEDs. The resistors will pull the line nicely high when the LEDs are off. The LEDs by themselves pass very little current as the line approaches the supply, so they wouldn't pull the line all the way up to the 5 V rail. Perhaps whatever is intended to be connected to these test points is supposed to see nice square-wave signals. With the resistors, the square-wave amplitude will be the forward voltage of the LEDs when on. Without them, the signal could be quite mushy.
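A quick sketch of those signal levels, using an assumed typical LED forward voltage (the supply is the 5 V rail from the question; the LED drop is an assumption):

```python
# Signal levels on the test-point line, assuming LEDs from the 5 V
# rail down to the line and a current-sink driver pulling it low.
V_SUPPLY = 5.0    # rail the parallel resistors pull the line up to (V)
V_F = 1.8         # assumed LED forward voltage when lit (V)

v_line_off = V_SUPPLY          # resistor pulls the idle line to the rail
v_line_on = V_SUPPLY - V_F     # lit LED clamps the line one drop below 5 V
amplitude = v_line_off - v_line_on
print(f"Square-wave amplitude: {amplitude:.1f} V")  # equals the LED drop
```

This is why the amplitude ends up being the LED forward voltage: the lit LED clamps the low level one diode drop below the rail, and the resistor defines the high level at the rail itself.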
I have used similar current-sink LED drivers, and not found them to have enough leakage to cause the issue I mentioned above. In my experience, the LEDs connected to them go nicely off when they are supposed to be off, so overcoming leakage is likely not the reason for the parallel resistors.