Discovered paper pair (Session 38). Detailed explanation not available.
Disentangling features into domain-invariant and domain-specific components mitigates negative transfer in multi-dataset training. Cross-attention fusion adaptively models interactions between the two components. Mutual-information optimization maximizes consistency of the domain-invariant component across datasets while minimizing its redundancy with the domain-specific component. Together, these complementary shared-private representations enable knowledge transfer while preserving dataset-specific discriminability.
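A minimal PyTorch sketch of this shared-private design, under stated assumptions: all names and sizes (SharedPrivateEncoder, hid_dim, num_domains, mi_losses) are illustrative, not the paper's, and an InfoNCE term plus a cosine-orthogonality penalty stand in for the paper's mutual-information estimators, which are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPrivateEncoder(nn.Module):
    """One shared (domain-invariant) encoder, one private (domain-specific)
    encoder per dataset, fused with cross-attention. Layer sizes are
    illustrative assumptions."""
    def __init__(self, in_dim=128, hid_dim=64, num_domains=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                    nn.Linear(hid_dim, hid_dim))
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                          nn.Linear(hid_dim, hid_dim))
            for _ in range(num_domains))
        # cross-attention fusion: shared features attend to private features
        self.fuse = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)

    def forward(self, x, domain):
        z_inv = self.shared(x)            # domain-invariant component
        z_spec = self.private[domain](x)  # domain-specific component
        # treat each feature vector as a length-1 sequence for attention
        fused, _ = self.fuse(z_inv.unsqueeze(1), z_spec.unsqueeze(1),
                             z_spec.unsqueeze(1))
        return z_inv, z_spec, fused.squeeze(1)

def mi_losses(z_inv_a, z_inv_b, z_inv, z_spec, temperature=0.1):
    """Surrogate MI objectives: InfoNCE pulls invariant codes of the same
    sample together across domains (MI lower bound); the cosine term
    penalizes invariant/specific redundancy."""
    a = F.normalize(z_inv_a, dim=-1)
    b = F.normalize(z_inv_b, dim=-1)
    logits = a @ b.t() / temperature
    consistency = F.cross_entropy(logits, torch.arange(a.size(0)))
    redundancy = F.cosine_similarity(z_inv, z_spec, dim=-1).abs().mean()
    return consistency + redundancy

# usage: the same batch viewed through two domains' encoders
enc = SharedPrivateEncoder()
x = torch.randn(8, 128)
z_inv_a, z_spec_a, fused_a = enc(x, domain=0)
z_inv_b, _, _ = enc(x + 0.01 * torch.randn_like(x), domain=1)
loss = mi_losses(z_inv_a, z_inv_b, z_inv_a, z_spec_a)
```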
Dual-scope representation learning with contrastive regularization under distribution shift. The system captures both local pairwise regulatory logic and global cross-context expression patterns via a dual-head architecture. Contrastive learning enforces structural constraints in the representation space, providing robustness to noise, sparsity, and cross-domain distribution shifts beyond what the supervised signal alone provides.
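A sketch of one plausible dual-head layout in PyTorch, assuming a shared backbone feeding a bilinear pairwise head (local regulatory logic) and a projection head (global patterns); DualScopeModel, contrastive_reg, and all dimensions are hypothetical names, and the NT-Xent-style loss is a generic stand-in for the paper's contrastive regularizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualScopeModel(nn.Module):
    """Dual-head sketch: a local head scores (regulator, target) pairs,
    a global head projects embeddings for cross-context comparison."""
    def __init__(self, in_dim=256, emb_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.local_head = nn.Bilinear(emb_dim, emb_dim, 1)  # pairwise logic
        self.global_head = nn.Linear(emb_dim, emb_dim)      # global patterns

    def forward(self, x_reg, x_tgt):
        h_reg, h_tgt = self.backbone(x_reg), self.backbone(x_tgt)
        pair_logit = self.local_head(h_reg, h_tgt).squeeze(-1)
        return pair_logit, self.global_head(h_reg)

def contrastive_reg(anchor, positive, temperature=0.2):
    """NT-Xent-style regularizer: two views of the same entity (e.g. under
    noise or different contexts) are pulled together, others pushed apart,
    supplementing the supervised pairwise signal."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature
    return F.cross_entropy(logits, torch.arange(a.size(0)))

# usage: supervised pairwise loss plus contrastive structural constraint
model = DualScopeModel()
x_reg, x_tgt = torch.randn(8, 256), torch.randn(8, 256)
logit, g = model(x_reg, x_tgt)
_, g_view2 = model(x_reg + 0.01 * torch.randn_like(x_reg), x_tgt)
loss = (F.binary_cross_entropy_with_logits(logit, torch.ones(8))
        + contrastive_reg(g, g_view2))
```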