Discovered paper pair (Session 38). Detailed explanation not available.
Disentangling features into domain-invariant and domain-specific components mitigates negative transfer in multi-dataset training. Cross-attention fusion then adaptively models the interaction between the two components, while a mutual-information objective maximizes the consistency of the domain-invariant code across datasets and minimizes its redundancy with the domain-specific code. Together, these complementary shared-private representations enable knowledge transfer while preserving dataset-specific discriminability.
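A minimal sketch of the idea, under loose assumptions: the two encoders are stand-in linear projections, cross-attention is plain scaled dot-product attention, and the mutual-information objective is approximated by a cosine-similarity proxy (consistency of invariant codes across domains minus invariant/specific redundancy). None of these choices come from the source; they are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_shared, w_private):
    """Split features into domain-invariant and domain-specific parts
    via two linear projections (toy stand-ins for learned encoders)."""
    return x @ w_shared, x @ w_private

def cross_attention(q, k, v):
    """Scaled dot-product cross-attention: one component's queries attend
    over the other's keys, fusing the two representations adaptively."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def mi_proxy_loss(inv_a, inv_b, spec_a, spec_b):
    """Proxy for the mutual-information objective: reward agreement of the
    invariant codes across domains, penalize invariant/specific overlap."""
    def cos(u, v):
        return np.sum(u * v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    consistency = cos(inv_a.ravel(), inv_b.ravel())
    redundancy = abs(cos(inv_a.ravel(), spec_a.ravel())) \
               + abs(cos(inv_b.ravel(), spec_b.ravel()))
    return -consistency + redundancy  # lower is better

# Two "domains" sharing a common signal plus domain-specific noise.
shared_signal = rng.normal(size=(8, 16))
x_a = shared_signal + 0.1 * rng.normal(size=(8, 16))
x_b = shared_signal + 0.1 * rng.normal(size=(8, 16))

w_shared = rng.normal(size=(16, 8))
w_private = rng.normal(size=(16, 8))

inv_a, spec_a = encode(x_a, w_shared, w_private)
inv_b, spec_b = encode(x_b, w_shared, w_private)

# Invariant queries attend over domain-specific keys/values.
fused_a = cross_attention(inv_a, spec_a, spec_a)
loss = mi_proxy_loss(inv_a, inv_b, spec_a, spec_b)
print(fused_a.shape, float(loss))
```

In a real system the projections would be trained to drive this loss down; here the forward pass only shows how the pieces compose.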
Transfer learning with structured graph representations enables few-shot adaptation across related domains by identifying shared interaction principles from well-studied systems and rapidly adapting to new contexts with minimal training data.
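A toy sketch of this transfer setup, with assumptions labeled: graphs are represented by hand-crafted structural descriptors (degree statistics, density, triangle count) rather than a learned graph encoder, and few-shot adaptation is ridge regression that shrinks toward the source-domain weights instead of zero. The feature set, the prior-regularized fit, and the synthetic data are all hypothetical illustrations, not the source's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def graph_features(adj):
    """Structural descriptors of a simple undirected graph:
    degree mean/variance, edge density, and triangle count."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    triangles = np.trace(adj @ adj @ adj) / 6.0
    density = adj.sum() / (n * (n - 1))
    return np.array([deg.mean(), deg.var(), density, triangles])

def random_graph(n, p, rng):
    """Erdos-Renyi-style random undirected graph as an adjacency matrix."""
    upper = rng.random((n, n)) < p
    adj = np.triu(upper, k=1).astype(float)
    return adj + adj.T

def fit_ridge(X, y, lam=1e-2, prior=None):
    """Ridge regression; with `prior`, shrink toward pretrained source
    weights instead of zero (few-shot adaptation with a domain prior)."""
    d = X.shape[1]
    prior = np.zeros(d) if prior is None else prior
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * prior)

# Source domain (well-studied system): label driven by edge density.
Xs = np.stack([graph_features(random_graph(20, p, rng))
               for p in np.linspace(0.1, 0.6, 50)])
ys = Xs[:, 2] * 3.0 + 0.05 * rng.normal(size=50)
w_src = fit_ridge(Xs, ys)

# Target domain: same interaction principle, shifted scale, 3 examples.
Xt = np.stack([graph_features(random_graph(20, p, rng))
               for p in [0.2, 0.4, 0.5]])
yt = Xt[:, 2] * 3.5
w_tgt = fit_ridge(Xt, yt, lam=1.0, prior=w_src)  # lean on source weights
pred = Xt @ w_tgt
print(np.abs(pred - yt).mean())
```

The prior-regularized fit is what makes three examples usable: the shared principle learned on the source system anchors the target weights.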