I have about 300 small- or medium-sized Node apps that my team has produced over the past few years which I'm trying to clean up and organize. Suffice it to say that my cherished assistants have not always meticulously used the --save flag with npm install, so the package.json files often do not reflect all the dependencies. Other times, they include packages that are not actually used, because someone DID use --save and then changed their mind about needing that package.
The apps all use the same filename conventions, so we can at least be thankful for that.
I could write a script that reads each source file as text, uses regex to look for require and import statements, and extracts the package names. (I can deal with versioning myself.) But this seems inelegant and inefficient.
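For concreteness, here's roughly what I have in mind (the `scanSource` function and both patterns are my own sketch, not an existing tool, and the regexes will surely miss edge cases like dynamic requires or multi-line imports):

```javascript
// Sketch: pull required/imported package names out of a source string.
// Relative paths are skipped; deep imports like "lodash/fp" are trimmed
// to the package root; scoped packages (@org/pkg) keep both segments.
function scanSource(src) {
  const deps = new Set();
  const patterns = [
    /require\(\s*['"]([^'"]+)['"]\s*\)/g,                    // CommonJS
    /import\s+(?:[\w*{},\s]+\s+from\s+)?['"]([^'"]+)['"]/g,  // ES modules
  ];
  for (const re of patterns) {
    let m;
    while ((m = re.exec(src)) !== null) {
      const spec = m[1];
      if (spec.startsWith('.') || spec.startsWith('/')) continue; // local file
      const parts = spec.split('/');
      deps.add(spec.startsWith('@') ? parts.slice(0, 2).join('/') : parts[0]);
    }
  }
  return [...deps];
}
```

Running that over every file and diffing the result against the dependencies key of package.json would flag both missing and unused packages, which is exactly the report I want.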
I've noticed when I run webpack on a project that the compiler processes the code, detecting any illegal syntax and, more to the point, any import of a package that is not available because it wasn't installed.
Normally I'd chafe at the process of executing an unknown script, but since these are all scripts written by known entities, I'm not worried about malfeasance. I'm mainly unclear how it is that a program like webpack parses a .js file without necessarily executing it, and returns specific errors with line numbers.
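The part I can demonstrate to myself is that compiling and executing are separate steps in Node: handing a string to the `Function` constructor (or running `node --check file.js`) parses it and throws a `SyntaxError` on bad syntax without ever running the body. A small illustration of that distinction (the sample strings are made up):

```javascript
// Compiling parses the source but does not run it: the throw below
// never fires, yet malformed syntax is still caught at compile time.
function parsesCleanly(src) {
  try {
    new Function(src); // parse + compile only; the body is never invoked
    return true;
  } catch (err) {
    if (err instanceof SyntaxError) return false;
    throw err; // anything else would be unexpected here
  }
}

console.log(parsesCleanly("throw new Error('never runs')")); // true: valid syntax
console.log(parsesCleanly("const const = 1"));               // false: SyntaxError
```

What I still don't understand is how webpack gets from "parsed cleanly" to "this import can't be resolved," with file and line information attached.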
I don't even necessarily need to automate the process of adding missing dependencies to the package.json file--many of the 300 apps are properly built. But it would still save me eons to quickly detect what's missing.
Does running a script to see if it works involve a VM? Or is it as simple as running the script from another script? Naturally, the apps themselves aren't packages, so just trying to require them wouldn't seem to work. Maybe it uses JSLint?