In reality, statements such as "likely to be slow on a large database" should be a red flag.
If you're going to have a large dataset, profiling and testing are vital: first to determine whether it will be a problem at all, and then whether it is enough of a problem to warrant the development time and cost of addressing it. Usually concerns like this turn out to be micro-optimisations that are unlikely to have any measurable impact on most code bases.
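For example, a rough way to time a candidate query in MySQL is the session profiler. This is only a sketch, not a full benchmark; the table and column names are taken from the question:

SET profiling = 1;
SELECT * FROM test WHERE col1 = 123 AND col2 = 456;
SHOW PROFILES;               -- elapsed time for each recent statement
SHOW PROFILE FOR QUERY 1;    -- stage-by-stage breakdown of query 1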
Anyway, let's answer the question.
Yes, hypothetically. Since the lookups here can be served from an index, and if you have huge amounts of data and query this table a lot, it can potentially be optimised by splitting the query into multiple execution sets rather than combining the conditions with operators inside a single query. If you are only going to query two combinations, as in your example, you could achieve more performance with a UNION such as:
(
    SELECT * FROM test
    WHERE col1 = 123 AND col2 = 456
)
UNION
(
    SELECT * FROM test
    WHERE col1 = 456 AND col2 = 123
)
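For contrast, the single-query form this replaces would look something like the following (I'm assuming, from the question, that the original combined the conditions with OR):

SELECT * FROM test
WHERE (col1 = 123 AND col2 = 456)
   OR (col1 = 456 AND col2 = 123);

Note that UNION also removes duplicate rows between the two branches; if the branches can never overlap, UNION ALL skips that deduplication step and is usually cheaper.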
An EXPLAIN for the UNION query is as follows:
ID      SELECT_TYPE   TABLE       TYPE  POSSIBLE_KEYS  KEY      KEY_LEN  REF     ROWS    EXTRA
1       PRIMARY       test        ref   PRIMARY        PRIMARY  4        const   1       Using where; Using index
2       UNION         test        ref   PRIMARY        PRIMARY  4        const   2       Using where; Using index
(null)  UNION RESULT  <union1,2>  ALL   (null)         (null)   (null)   (null)  (null)
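That plan is consistent with an index covering both columns. As an illustration only (this is my guess at the shape of the schema, not the fiddle's actual DDL, and your exact plan will vary with your schema and data), something like this would let each branch of the UNION be resolved as an index lookup:

CREATE TABLE test (
    col1 INT NOT NULL,
    col2 INT NOT NULL,
    PRIMARY KEY (col1, col2)  -- covers both branches of the UNION
);
INSERT INTO test (col1, col2) VALUES (123, 456), (456, 123), (456, 789);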
Take a look at this SQL fiddle http://sqlfiddle.com/#!2/9dc07a/1/0 for a simple test case.
The hedged language I've used in this post ("might", "could", etc.) is deliberate: I've not front-loaded this example with hundreds of millions of records. I would strongly suggest you do so, then evaluate and profile your query in more detail.
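If you want to generate that kind of volume for testing, a recursive CTE can bulk-load synthetic rows. This sketch assumes MySQL 8+ (recursive CTEs) and an arbitrary row count of one million:

SET SESSION cte_max_recursion_depth = 1000000;
INSERT INTO test (col1, col2)
WITH RECURSIVE seq (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 1000000
)
SELECT n, n + 1 FROM seq;  -- synthetic (col1, col2) pairs, unique per row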
Unfortunately, with optimisation there isn't always a clear and simple answer of "do x to get greater performance". The query optimiser is a complex beast, and sometimes chasing every last drop of performance can actually cripple your application (I'm speaking from experience here). So please, unless you genuinely have to worry about these micro-optimisations, don't; if you do, then evaluate, profile and test fully before deciding on an approach.