Ok, I am summarizing my comments in a proper answer.
You have two possible solutions:
- store them in NVARCHAR2/NCLOB columns
- re-encode JSON values in order to use only ASCII characters
1. NCLOB/NVARCHAR2
The "N" character in "NVARCHAR2" stands for "National": this type of column has been introduced exactly to store characters that can't be represented in the "database character set".
Oracle actually supports TWO character sets:
"Database Character Set" it is the one used for regular varchar/char/clob fields and for the internal data-dictionary (in other words: it is the character set you can use for naming tables, triggers, columns, etc...)
"National Character Sets": the character set used for storing NCLOB/NCHAR/NVARCHAR values, which is supposed to be used to be able to store "weird" characters used in national languages.
Normally the second one is a Unicode character set, so you can store any kind of text there, even on older installations.
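As a rough sketch of what this looks like in practice, here is a minimal example using the python-oracledb driver; the table JSON_DOCS, its DOC column and the connection details are placeholders I made up, not anything from your schema:

```python
import oracledb

# Placeholder connection details; adjust to your environment.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Assumes a table like: CREATE TABLE json_docs (doc NCLOB)
# NCLOB/NVARCHAR2 columns are stored in the national character set,
# so non-ASCII JSON survives even if the database character set is single-byte.
json_text = '{"UnicodeCharsTest": "niño"}'

# Bind explicitly as NCLOB so the value is not converted through
# the (possibly single-byte) database character set on the way in.
cur.setinputsizes(oracledb.DB_TYPE_NCLOB)
cur.execute("INSERT INTO json_docs (doc) VALUES (:1)", [json_text])
conn.commit()
```

The important detail is the explicit NCLOB bind: without it the driver may bind the value as an ordinary string and push it through the database character set.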
2. Encode JSON values using only ASCII characters
It is true that the JSON standard is designed with Unicode in mind, but it is also true that it allows characters to be expressed as escape sequences using the hexadecimal representation of their code points (characters outside the Basic Multilingual Plane are written as surrogate pairs). If you do this for every character with a code point greater than 127, you can express ANY Unicode string using only ASCII characters.
This ASCII JSON string: '{"UnicodeCharsTest":"ni\u00f1o"}' represents the very same object as this one: '{"UnicodeCharsTest" : "niño"}'.
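For example, in Python the standard json module does this escaping for you (ensure_ascii defaults to True); the snippet below just illustrates the idea and assumes nothing about your application:

```python
import json

obj = {"UnicodeCharsTest": "niño"}

ascii_json = json.dumps(obj)                      # '{"UnicodeCharsTest": "ni\u00f1o"}'
utf8_json  = json.dumps(obj, ensure_ascii=False)  # '{"UnicodeCharsTest": "niño"}'

# Both forms decode back to exactly the same object.
assert json.loads(ascii_json) == json.loads(utf8_json) == obj
print(ascii_json)   # every character in this string is plain ASCII
```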
Personally I prefer this second approach because it lets me share these JSON strings easily with systems that use antiquated legacy protocols, and it also guarantees that the JSON strings are read correctly by any client regardless of its national settings (the Oracle client protocol can try to translate strings into the character set used by the client... and this is a complication I don't want to deal with. By the way, this might be the cause of the problems you are experiencing with your SQL clients).
NLS_CHARACTERSET=AL32UTF8 means UTF-8. Of course UTF-8 also supports single-byte characters. Why would you still want to use a single-byte character set in 2018?
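If you want to double-check which character sets your database actually uses, you can query nls_database_parameters; a minimal sketch with python-oracledb and placeholder credentials:

```python
import oracledb

# Placeholder connection details; adjust to your environment.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
cur = conn.cursor()
cur.execute("""
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')
""")
for parameter, value in cur:
    print(parameter, "=", value)
```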