Also: observation interference in partially observable assistance games, mitigating uneven forgetting in harmful fine-tuning, filtering pretraining data builds tamper-resistant safeguards into open-weight models, a mechanistic view of how post-training reshapes models, mitigating reward hacking in external reasoning via backdoor correction, eliciting and analyzing emergent misalignment, safeguarding reasoning models with aha moments, estimating worst-case frontier risks of open-weight models, attributing alignment failures to training-time belief sources
detecting unknown jailbreak attacks in vision…