Assuming your input JSON document is valid, like the following:
{
  "fieldA": { "fieldData": "XYZ" },
  "fieldB": { "fieldData": "PQR" },
  "fieldC": { "fieldData": null },
  "fieldD": { "fieldData": "DEF" }
}
Then you can start your jq expression by removing the entries whose fieldData value is null:
jq 'map_values(select(.fieldData != null))' file
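Assuming the input above, the intermediate result of that first filter is the same object minus the fieldC entry (shown with -c for compact output):

```shell
$ jq -c 'map_values(select(.fieldData != null))' file
{"fieldA":{"fieldData":"XYZ"},"fieldB":{"fieldData":"PQR"},"fieldD":{"fieldData":"DEF"}}
```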
To then access the index of each key in the resulting object, we can use to_entries twice and extract the index along with the other data we're interested in:
$ jq -r 'map_values(select(.fieldData != null)) | to_entries | to_entries | map(.key+1, .value.key, .value.value.fieldData)[]' file
1
fieldA
XYZ
2
fieldB
PQR
3
fieldD
DEF
The first call to to_entries moves each key into the data as an entry's key field; the second wraps each of those entries with its numeric array index.
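To see why two calls are needed, here is what each stage produces for the first entry of the filtered object (compact output, same input as above):

```shell
$ jq -c 'map_values(select(.fieldData != null)) | to_entries | .[0]' file
{"key":"fieldA","value":{"fieldData":"XYZ"}}
$ jq -c 'map_values(select(.fieldData != null)) | to_entries | to_entries | .[0]' file
{"key":0,"value":{"key":"fieldA","value":{"fieldData":"XYZ"}}}
```

After the second call, .key+1 gives the 1-based index, .value.key the original key name, and .value.value.fieldData the data.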
Note that the order of keys in JSON objects is not necessarily fixed. If you need a fixed ordering of things in a JSON structure, use an array instead. Maybe something like
[
  { "name": "fieldA", "fieldData": "XYZ" },
  { "name": "fieldB", "fieldData": "PQR" },
  { "name": "fieldC", "fieldData": null },
  { "name": "fieldD", "fieldData": "DEF" }
]
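For the array shape just shown (assuming these name/fieldData field names, which differ from the data in your other question), a sketch of the same extraction needs only one to_entries:

```shell
$ jq -r 'map(select(.fieldData != null)) | to_entries | map(.key+1, .value.name, .value.fieldData)[]' file
1
fieldA
XYZ
2
fieldB
PQR
3
fieldD
DEF
```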
Using your original data from your previous related question (get key and value from json in array with check):
jq -r 'map(select(.name != "null")) | to_entries | map(.key+1, .value.name, .value.type)[]' file
or
jq -r 'map(select(.name != "null") | [.name, .type]) | to_entries | map(.key+1, .value[])[]' file
which is slightly closer to my answer to your other question.
Note the lack of an initial call to to_entries compared to the above: since the data no longer lives in the keys, we do not have to investigate the keys themselves.