This is about table locks and streaming replication conflicts. My answer is about PostgreSQL and will apply to Amazon Aurora only to the (unknown) extent that it behaves like PostgreSQL.
pg_dump has to read the tables, and reading a table requires an ACCESS SHARE lock on it. Such a lock conflicts only with operations that take an ACCESS EXCLUSIVE lock, like DROP TABLE, TRUNCATE, CLUSTER, VACUUM (FULL) and certain variants of ALTER TABLE. An ACCESS SHARE lock does not block writers; it merely prevents concurrent sessions from deleting the data files you are currently reading.
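To illustrate (a sketch; the table name `t` is made up), you can watch the ACCESS SHARE lock in `pg_locks` while a read transaction is open, just as pg_dump holds one for the duration of the dump:

```sql
-- session 1: keep a read transaction open (as pg_dump does)
BEGIN;
SELECT count(*) FROM t;   -- takes ACCESS SHARE on t, held until COMMIT

-- session 2: inspect the lock
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE relation = 't'::regclass;

-- session 2: writers are not blocked ...
INSERT INTO t VALUES (1);
-- ... but TRUNCATE t would have to wait, because it needs ACCESS EXCLUSIVE
```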
Now if you have a long-running query like pg_dump on the standby and somebody runs TRUNCATE on a table on the primary, PostgreSQL will replay the statement, including its ACCESS EXCLUSIVE lock, on the standby. That lock conflicts with the long-running query, and if pg_dump is not done after max_standby_streaming_delay has passed, the query is canceled and pg_dump terminates with an error.
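When that happens, pg_dump fails with the typical recovery conflict error, and you can count such cancellations on the standby in the pg_stat_database_conflicts view (a diagnostic sketch, not specific to pg_dump):

```sql
-- typical error seen by the canceled query / pg_dump:
--   ERROR: canceling statement due to conflict with recovery

-- on the standby: cancellations caused by replayed locks and by
-- snapshot conflicts, per database
SELECT datname, confl_lock, confl_snapshot
FROM pg_stat_database_conflicts;
```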
Note that the conflict need not be with one of the statements listed above: if autovacuum processes a table on the primary and the last couple of pages of the table have become empty, VACUUM will try to truncate these pages away, which also requires a brief ACCESS EXCLUSIVE lock on the table. This does not disrupt processing on the primary, but it can lead to canceled queries on the standby.
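If you can identify a table where this truncation keeps causing conflicts, PostgreSQL 12 and later lets you switch it off per table (a sketch; the table name `t` is made up, and the trade-off is that empty pages at the end of the table are no longer given back to the operating system):

```sql
-- stop (auto)vacuum from truncating empty pages at the end of this
-- table, which avoids the brief ACCESS EXCLUSIVE lock
ALTER TABLE t SET (vacuum_truncate = off);
```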
Set max_standby_streaming_delay to -1 on the standby server to avoid the problem.
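A minimal configuration sketch for the standby's postgresql.conf; note that with -1 the standby will delay WAL replay for as long as the conflicting query runs, so replication can fall behind while pg_dump is working:

```
# postgresql.conf on the standby
max_standby_streaming_delay = -1   # never cancel standby queries to apply WAL
```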
Here is an article that deals with the problem in more detail.