What you are trying to do here is similar to what a database engine has to do when joining data from two tables together. A database engine will typically have a number of different join plans to choose from, and it will attempt to choose the best one based on what it knows about the data in each table.
The same applies to you. There are several ways to join the data, and the best one depends on factors such as the size of each input file and whether the files are already sorted.
Some possible approaches:
A 'Nested Loop', where you read each line of the enrolled_students.txt file and, for each of those, iterate through the other file(s) to find a match. This is not likely to be very fast; you would probably only choose it if the files were too large for any other solution to be practical.
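To make that concrete, here is a minimal nested-loop sketch. The file names come from your question, but the layout is an assumption: tab-delimited lines with the join key in the first field.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Nested loop: for every student row, rescan the country file for a match.
# Assumed layout: tab-delimited, join key in the first field.
open my $students, '<', 'enrolled_students.txt' or die "open students: $!";
while ( my $student = <$students> ) {
    chomp $student;
    my ($key) = split /\t/, $student;

    open my $countries, '<', 'name_of_country.txt' or die "open countries: $!";
    while ( my $country = <$countries> ) {
        chomp $country;
        my ( $ckey, @rest ) = split /\t/, $country;
        print join( "\t", $student, @rest ), "\n" if $ckey eq $key;
    }
    close $countries;
}
close $students;
```

The inner file is re-read once per student row, which is exactly why this approach is slow.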
A 'Hash Join', where you read one side of the join (in your example, probably the name_of_country.txt) into an in-memory data structure indexed by a hash. Then for each row of the other file, you look up the corresponding row in the hash. This can perform very well, as long as there is enough memory to hold at least one of the two sets of data at once.
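A minimal hash-join sketch, under the same assumed layout:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hash join: load one file into a hash keyed on the join field,
# then stream the other file and probe the hash once per row.
my %country_for;
open my $countries, '<', 'name_of_country.txt' or die "open countries: $!";
while ( my $line = <$countries> ) {
    chomp $line;
    my ( $key, @rest ) = split /\t/, $line;
    $country_for{$key} = \@rest;    # built in one pass; O(1) lookups later
}
close $countries;

open my $students, '<', 'enrolled_students.txt' or die "open students: $!";
while ( my $student = <$students> ) {
    chomp $student;
    my ($key) = split /\t/, $student;
    print join( "\t", $student, @{ $country_for{$key} } ), "\n"
        if exists $country_for{$key};
}
close $students;
```

Each file is read exactly once, so this is usually the approach to try first.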
If both files are sorted according to the same key, you might be able to use a 'Merge Join'. This is where you read rows from both files in step, matching the records together like the teeth of a zipper.
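A merge-join sketch, assuming both files are sorted on that first field and that name_of_country.txt has at most one row per key (many students per country is fine):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Merge join: both files must already be sorted on the same join key.
open my $s_fh, '<', 'enrolled_students.txt' or die "open students: $!";
open my $c_fh, '<', 'name_of_country.txt'   or die "open countries: $!";

my $s_row = <$s_fh>;
my $c_row = <$c_fh>;
while ( defined $s_row and defined $c_row ) {
    chomp( my $s = $s_row );
    chomp( my $c = $c_row );
    my ($s_key) = split /\t/, $s;
    my ($c_key) = split /\t/, $c;

    if    ( $s_key lt $c_key ) { $s_row = <$s_fh> }   # student has no match
    elsif ( $s_key gt $c_key ) { $c_row = <$c_fh> }   # country not needed
    else {
        print "$s\t$c\n";
        $s_row = <$s_fh>;    # keep the country row for further students
    }
}
close $s_fh;
close $c_fh;
```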
The above assumes a simple case with two data files that have to be joined. Your question talks about 100 different name_of_country.txt files, which might complicate matters - although with the hash approach you could build the hash from enrolled_students.txt once and then stream each of the 100 files against it.
In regard to your second question - whether you can use parallel processing - that would probably only be useful if the job were CPU-bound. A join like this is more likely to be limited by disk I/O, so the complexity of a forked or threaded solution is probably not warranted unless you measure the single-process version and find it really is CPU-bound.
Finally - if you are doing multiple analysis runs over the same data, it might be advisable to import the data into a real database and use that to run your queries. That would save you a lot of coding work.
A good candidate is DBD::SQLite. It is self-contained (you don't have to install a separate database server) and will probably be much faster than anything you hack together yourself. Of course, you would have to load the initial set of data into the database first...do these text files change often?
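For completeness, a minimal loading sketch with DBI and DBD::SQLite. The database file, table, and column names here (students.db, countries, student_id, country) are invented for the example; adjust them to your real layout, and load enrolled_students.txt the same way.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# AutoCommit off so the whole load is one transaction - much faster
# than committing after every row.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=students.db', '', '',
    { RaiseError => 1, AutoCommit => 0 } );

$dbh->do('CREATE TABLE IF NOT EXISTS countries (student_id TEXT, country TEXT)');

my $sth = $dbh->prepare('INSERT INTO countries (student_id, country) VALUES (?, ?)');
open my $fh, '<', 'name_of_country.txt' or die "open: $!";
while ( my $line = <$fh> ) {
    chomp $line;
    my ( $student_id, $country ) = split /\t/, $line;
    $sth->execute( $student_id, $country );
}
close $fh;

$dbh->commit;
$dbh->disconnect;
```

Once the tables are loaded, the join itself becomes a single SELECT ... JOIN statement, and SQLite's planner chooses the join strategy for you.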