Uncategorized

Test-Time Backdoor Attacks on Multimodal Large Language Models



"Large Language Models"Backdoor attacks are commonly executed by contaminating training data, such that a trigger can activate predetermined harmful effects during the test phase.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *