The other solutions you've been given are correct, understandable, and good Python, and they are reasonably performant if your set is small.
It is, however, possible to do what you want much more quickly using an index (at, of course, a considerable cost in memory and setup time; TANSTAAFL). The index gives constant-time lookups no matter how big your data gets, assuming you have enough memory to hold it, so if you're doing a lot of lookups it can make your script a lot faster. And the memory isn't as bad as it could be...
We'll build a dict in which the keys are every possible substring from the items in the index, and the values are a set of the items that contain that substring.
    from collections import defaultdict

    class substring_index(defaultdict):
        def __init__(self, seq=()):
            defaultdict.__init__(self, set)
            for item in seq:
                self.add(item)

        def add(self, item):
            assert isinstance(item, str)  # the index only handles strings
            if item not in self[item]:    # skip items that are already indexed
                size = len(item) + 1
                for chunk in range(1, size):
                    for start in range(0, size - chunk):
                        self[item[start:start + chunk]].add(item)

    seto = substring_index()
    seto.add('C123.45.32')
    seto.add('C2345.345.32')

    print(len(seto))  # 97 entries for 2 items; I wasn't kidding about the memory
Now you can easily (and instantly) test to see whether any substring is in the index:
    print('C' in seto)  # True
Or you can easily find all strings that contain a particular substring:
    print(seto['C'])  # {'C2345.345.32', 'C123.45.32'} (set order may vary)
This can be pretty easily extended to include "starts with" and "ends with" matches, too, or to be case-insensitive.
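For example, "starts with" and "ends with" queries can be handled by indexing each item wrapped in sentinel characters. This is a sketch of the idea, not part of the original code; the sentinels `'^'` and `'$'` are an assumption and must be characters that never occur in your data:

```python
from collections import defaultdict

class affix_index(defaultdict):
    """Substring index that also answers "starts with" / "ends with"
    queries by wrapping each item in sentinel characters ('^' and '$'),
    assumed never to occur in the data itself."""
    def __init__(self, seq=()):
        defaultdict.__init__(self, set)
        for item in seq:
            self.add(item)

    def add(self, item):
        if item not in self[item]:          # skip items already indexed
            wrapped = '^' + item + '$'      # sentinels mark the two ends
            size = len(wrapped) + 1
            for chunk in range(1, size):
                for start in range(0, size - chunk):
                    self[wrapped[start:start + chunk]].add(item)

    def starts_with(self, prefix):
        return self['^' + prefix]           # substrings touching the front

    def ends_with(self, suffix):
        return self[suffix + '$']           # substrings touching the back

idx = affix_index(['C123.45.32', 'C2345.345.32'])
print(idx.starts_with('C123'))  # {'C123.45.32'}
```

Plain substring lookups still work as before, since every wrapped substring that doesn't touch a sentinel is an ordinary substring of the item.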
For a less memory-intensive version of the same idea, look into tries.
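As a rough illustration of that idea, here's a hypothetical suffix-trie sketch (the names `SuffixTrie` and `TrieNode` are made up for this example): every suffix of each item is inserted into a character trie, so walking the trie along a query string finds every item that contains it, without storing every substring as a separate dict key:

```python
class TrieNode:
    __slots__ = ('children', 'items')
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.items = set()   # items containing the substring ending here

class SuffixTrie:
    def __init__(self, seq=()):
        self.root = TrieNode()
        for item in seq:
            self.add(item)

    def add(self, item):
        # Insert every suffix; each trie node then represents one substring.
        for start in range(len(item)):
            node = self.root
            for ch in item[start:]:
                node = node.children.setdefault(ch, TrieNode())
                node.items.add(item)

    def __getitem__(self, sub):
        # Walk the trie along sub; a dead end means no item contains it.
        node = self.root
        for ch in sub:
            node = node.children.get(ch)
            if node is None:
                return set()
        return node.items

    def __contains__(self, sub):
        return bool(self[sub])

trie = SuffixTrie(['C123.45.32', 'C2345.345.32'])
print('.45' in trie)  # True
```

Shared prefixes among suffixes share trie nodes, which is where the memory savings over the flat dict come from.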