By "game loop", I'm assuming a loop which doesn't block (e.g., doesn't put the thread to sleep) and has things to process regardless of whether there are new OS events. On Windows, that would be an application that uses PeekMessage instead of GetMessage: PeekMessage is non-blocking and won't put the thread to sleep even if there are no events in the queue to process.
It's also only a relevant concept in multitasking environments. In the DOS days, practically every GUI application that didn't finish and close on its own without user input had a loop in the main entry point doing things like polling hardware for input. The idea definitely wasn't reserved primarily for games back then.
Game loops would be overkill for 99.999% of paint applications if not all, since there's nothing meaningful to process when there's no user input. If I go away from the computer with MS Paint open, it shouldn't be doing anything behind my back. It shouldn't take any CPU cycles at all. The only time the software has something to do is when I'm interacting with it, like clicking and dragging on the canvas.
Meanwhile, games have to process all kinds of things many times per second even if you step away from the computer. Enemies will still continue to run at you, try to kill you, etc., regardless of whether you're pushing buttons or not. Unless the application is multithreaded at a broad design level, the approach that often makes the most sense is simply to avoid blocking functions in event processing (timer events included, since those still tend to suffer from latency issues).
A non-game application which could benefit from a game-style loop is a video player, at least while a video is playing: it has many frames of video to render and audio to play even when you step away from the computer during your favorite movie. Even ancient Win32 examples avoided blocking in their event processing so they could keep rendering the constantly-playing video.
> But - please correct me if I'm mistaken - it would be difficult to repaint only the specific part of the screen where the new small line was drawn. If so, a repaint of the entire screen would be necessary.
This is completely independent of game-loop concepts and is just a matter of efficient redrawing, but it's not so difficult to avoid redrawing everything any time anything changes, especially in a 2D context like this one.
A crude but very effective technique is to partition your canvas into a grid, with each cell spanning, say, 64x64 pixels. Whenever you modify, insert, or erase a shape, mark the grid cells it occupies as needing to be redrawn. For simplicity you can just look at the bounding rectangle of each shape instead of doing fancy intersection tests to figure out which cells it occupies.
Now when you render the results, just repaint the cells that were marked as needing to be redrawn. This is similar to marking each and every pixel that needs to be redrawn, except that per-pixel tracking tends to be too granular and involves too much data, to the point where it could be just as slow as or slower than redrawing everything. After all, if you're rendering lines with Bresenham, most of the cost is in figuring out which pixels to plot, so you'd gain nothing by marking each individual pixel the line-drawing function has to paint all over again.
So instead we work with coarser grid cells that span, say, 64x64 pixels each, mark those as needing to be redrawn or not, and use the bounding rectangles of shapes to determine which cells they occupy.

Each 64x64 cell needs to store nothing more than a boolean indicating whether or not it needs to be redrawn, so if the grid is only used for drawing purposes, a cell can simply be 1 bit.
This grid can also double as a useful way to query which shape is under the mouse cursor without looping through every single shape on the canvas. If you use it to accelerate queries like that, then each cell can store the head of a singly-linked list (which could be a 32-bit index rather than a pointer). For 2D video games, the same structure is also useful for collision detection.
It also allows your back buffer to simply be the size of a grid cell, like 64x64 (4,096) pixels, instead of having to match the entire size of the window, since you can just draw to a 64x64 image and blit it to the window. If you want to render multithreaded, each thread still only needs one 64x64 tile to draw into before blitting it and returning it to the "tile/image bucket pool". That can really minimize memory use: a 64x64 tile at 4 bytes per pixel is only 16 kilobytes, as opposed to roughly 14 megabytes for a 2560x1440 (3,686,400-pixel) back buffer matching a maximized window.

I'm not sure if people really care much about that these days. I'm still stuck in my old habits and still find a lot of aesthetic value in applications that take kilobytes of RAM, barely any disk space, and open and close in the blink of an eye. I don't find that so time-consuming to achieve if you can come up with core data structures that efficiently fulfill many needs at once with very little memory use, as in this case.
A fancier solution is to allow each cell in the grid to subdivide as needed, at which point you get a quad-tree. In practice, however, I find grids outperform my quad-tree implementations, at least for my needs, which usually revolve around very dynamic inputs with millions of elements moving around all the time.