Qwen (千问) took a completely different route: it poured RMB 3 billion in red-envelope subsidies into the market, talking not about technology but about everyday life. Ordering milk tea, booking train tickets, buying New Year goods: AI was wedged into every crack of daily routine. "One-sentence ordering" handled nearly 200 million transactions, 55.2 million cups of milk tea were given away free, and more than 3,000 tonnes of eggs were sold, a 940% increase that did not come out of nowhere. AI pushed its way straight into the consumption habits of 1.4 billion users. Alibaba bet on practicality, and so far the bet looks right.
Article 63: Any special agreement under which the carrier assumes obligations not provided for in this Chapter or waives rights conferred by this Chapter shall be binding on the actual carrier only when the actual carrier has expressly agreed to it in writing; whether or not the actual carrier has so agreed, the special agreement remains effective with respect to the carrier.
Abstract: Humans shift between different personas depending on social context. Large Language Models (LLMs) demonstrate a similar flexibility in adopting different personas and behaviors. Existing approaches, however, typically adapt such behavior through external knowledge such as prompting, retrieval-augmented generation (RAG), or fine-tuning. We ask: do LLMs really need external context or parameters to adapt their behavior, or is such knowledge already embedded in their parameters? In this work, we show that LLMs already contain persona-specialized subnetworks in their parameter space. Using small calibration datasets, we identify distinct activation signatures associated with different personas. Guided by these statistics, we develop a masking strategy that isolates lightweight persona subnetworks. Building on these findings, we further ask: how can we discover opposing subnetworks in the model that yield binary-opposed personas, such as introvert vs. extrovert? To sharpen the separation in such binary-opposition settings, we introduce a contrastive pruning strategy that identifies the parameters responsible for the statistical divergence between opposing personas. Our method is entirely training-free and relies solely on the language model's existing parameter space. Across diverse evaluation settings, the resulting subnetworks exhibit significantly stronger persona alignment than baselines that require external knowledge, while being more efficient. Our findings suggest that diverse human-like behaviors are not merely induced in LLMs but are already embedded in their parameter space, pointing toward a new perspective on controllable and interpretable personalization in large language models.
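The abstract names two concrete mechanisms: collecting activation statistics on a small calibration set to locate a persona subnetwork via masking, and a contrastive pruning step that keeps the parameters whose statistics diverge most between opposing personas. The paper's exact procedure is not given here, so the PyTorch sketch below is only an illustration under stated assumptions: the helpers `collect_activation_signature`, `persona_mask`, and `contrastive_mask`, the `keep_ratio` heuristic, and the per-unit mean-absolute-activation statistic are all hypothetical stand-ins, and the masks here are computed over activations rather than weights.

```python
import torch

def collect_activation_signature(model, calib_batches, layer_names):
    """Hypothetical helper: mean absolute activation per hidden unit,
    gathered with forward hooks over a small persona calibration set."""
    sums = {name: None for name in layer_names}
    counts = {name: 0 for name in layer_names}
    handles = []

    def make_hook(name):
        def hook(module, inputs, output):
            # Assumes the hooked module emits a [batch, seq_len, hidden] tensor.
            act = output.detach().abs().mean(dim=(0, 1))
            sums[name] = act if sums[name] is None else sums[name] + act
            counts[name] += 1
        return hook

    for name, module in model.named_modules():
        if name in layer_names:
            handles.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        for batch in calib_batches:  # small persona-specific calibration data
            model(batch)
    for h in handles:
        h.remove()
    return {name: sums[name] / counts[name] for name in layer_names}

def persona_mask(signature, keep_ratio=0.1):
    """Binary mask keeping the top-k most active units per layer,
    i.e. one simple way to isolate a lightweight persona subnetwork."""
    masks = {}
    for name, sig in signature.items():
        k = max(1, int(keep_ratio * sig.numel()))
        threshold = sig.topk(k).values.min()
        masks[name] = (sig >= threshold).float()
    return masks

def contrastive_mask(sig_a, sig_b, keep_ratio=0.1):
    """Keep the units whose statistics diverge most between two opposing
    personas (e.g. introvert vs. extrovert), mirroring the contrastive
    pruning idea at the activation level."""
    masks = {}
    for name in sig_a:
        divergence = (sig_a[name] - sig_b[name]).abs()
        k = max(1, int(keep_ratio * divergence.numel()))
        threshold = divergence.topk(k).values.min()
        masks[name] = (divergence >= threshold).float()
    return masks
```

At inference time such a mask would be applied multiplicatively, zeroing the complement so that only the selected subnetwork participates, which is consistent with the training-free, parameters-only claim in the abstract; how the actual method selects and applies its masks may differ.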