Finally, you also need to store the key ($1) that corresponds to each line number (I'll use an array called linekeys for this, with the line number as the index and the key, $1, as the value). BTW, if the first file were so huge that you had to process it a second time, this array wouldn't be needed, as you could just get the key from $1 as you process each line again. Technically, this array isn't strictly needed at all, as you could split() the key out of lines[l] in the END{} block when you need it, but it's easier to do it this way - trading a bit more memory usage for simpler code and possibly a faster run-time.
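To make that bookkeeping concrete, here's a stripped-down sketch (not the full script above - the file2 handling and the output are just placeholders) showing lines[] and linekeys[] being filled while reading the first file and then used in the END{} block:

```
awk '
NR == FNR {                   # still reading file1
    lines[FNR]    = $0        # whole line, indexed by line number
    linekeys[FNR] = $1        # its key, same index
    nlines        = FNR
    next
}
# ... per-line processing of file2 goes here in the real script ...
END {
    for (l = 1; l <= nlines; l++) {
        key = linekeys[l]     # instead of: split(lines[l], flds); key = flds[1]
        print key " -> " lines[l]
    }
}
' file1 file2
```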
BTW, I'd recommend saving this either as a sh script as-is (except using "$@" as the argument to awk instead of file1 file2, so you can specify the input files on the command line when you run it, e.g. bash scriptname.sh file1 file2), OR as an awk script (remove the awk command, the single-quotes, and the filenames) so you can run it as awk -f scriptname.awk file1 file2. With an appropriate #! line as the first line of the script, you can also make it executable so you can run it directly, without having to type the interpreter name on the command line.
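For illustration only (with a trivial print standing in for the real awk program), the sh-wrapper form would look something like:

```
#!/bin/sh
# scriptname.sh - the "$@" form: whatever filenames you give the script
# are passed straight through to awk.
awk '
    { print FILENAME ": " $0 }   # real awk program goes here
' "$@"
```

and the standalone awk form (the awk path in the #! line varies by system; /usr/bin/awk is just a common location) something like:

```
#!/usr/bin/awk -f
# scriptname.awk - run as:  awk -f scriptname.awk file1 file2
# or, after chmod +x scriptname.awk:  ./scriptname.awk file1 file2
{ print FILENAME ": " $0 }       # real awk program goes here, unquoted
```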
Or, if you really insist, you could squeeze the entire script onto one line - semicolons have been left in place where needed between statements to allow for that. I wouldn't recommend it, though: the shell command line is a terrible place to be editing scripts, even ones as short as this, and even with convenience features (like Ctrl-X Ctrl-E in bash) to edit the current line in vi or your preferred editor.