
ACL 2024 Awards: Huazhong University of Science and Technology's Oracle Bone Decipherment Work Among the Best Papers; GloVe Wins the Test of Time Award

2024-08-15


Machine Heart Report

Machine Heart Editorial Department

This year's ACL conference delivered a rich harvest.

The six-day ACL 2024 conference is being held in Bangkok, Thailand.



ACL is the top international conference in the fields of computational linguistics and natural language processing. It has consistently ranked first in academic influence within NLP and is a CCF-A recommended conference.

This year's ACL is the 62nd edition of the conference, and it accepted more than 400 cutting-edge NLP works. Yesterday afternoon, the conference announced the best papers and other awards: 7 Best Paper Awards (two of them as yet unpublished), 1 Best Theme Paper Award, and 35 Outstanding Paper Awards.

The conference also presented 3 Resource Paper Awards, 3 Social Impact Awards, and 2 Test of Time Awards.

In addition, the Lifetime Achievement Award was presented at the conference to Ralph Grishman, professor in the Department of Computer Science at New York University.

The detailed award information follows.

Best Paper Awards



Paper 1: Mission: Impossible Language Models

  • Authors: Julie Kallini, Isabella Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts
  • Institutions: Stanford University, University of California, Irvine, University of Texas at Austin
  • Paper link: https://arxiv.org/abs/2401.06416

Paper introduction: Chomsky and others have argued that large language models (LLMs) are equally capable of learning languages that humans can and cannot learn. However, there is little published experimental evidence to support this claim.

The study develops a set of synthetic languages of varying complexity, each created by systematically altering English data with unnatural word orders and grammar rules, with the aim of designing synthetic languages that humans could not learn.

The study runs extensive evaluation experiments to assess how well small GPT-2 models can learn these "impossible languages," and repeats these evaluations at various stages throughout training to compare the learning process for each language. The core finding is that, compared with English, GPT-2 struggles to learn the "impossible languages," challenging the claim made by Chomsky and others.

More importantly, the study hopes its approach will open up a fruitful line of research in which different LLM architectures are tested on a variety of "impossible languages," helping us understand how LLMs can serve as tools for cognitive and typological research.
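To make the kind of perturbation concrete, here is a minimal sketch (our illustration, not the authors' exact transformation set) of how "impossible" counterparts of an English sentence can be derived through deterministic but linguistically unnatural token reorderings:

```python
# Illustrative only: derive "impossible" variants of an English sentence by
# applying deterministic but linguistically unnatural reorderings of its tokens.

def full_reverse(tokens):
    # Reverse the whole sentence.
    return tokens[::-1]

def partial_reverse(tokens):
    # Keep the first half in place and reverse the second half.
    mid = len(tokens) // 2
    return tokens[:mid] + tokens[mid:][::-1]

def even_odd_interleave(tokens):
    # Emit even-indexed tokens first, then odd-indexed ones:
    # a perfectly systematic rule, but not one any natural language uses.
    return tokens[0::2] + tokens[1::2]

sentence = "the cat chased the small mouse".split()
for transform in (full_reverse, partial_reverse, even_odd_interleave):
    print(transform.__name__, "->", " ".join(transform(sentence)))
```

A GPT-2-sized model is then trained from scratch on a corpus rewritten with one such rule, and its learning curve is compared with that of a model trained on ordinary English.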



Paper 2: Why are Sensitive Functions Hard for Transformers?

  • Authors: Michael Hahn, Mark Rofin
  • Institution: Saarland University
  • Paper link: https://arxiv.org/abs/2402.09963

Abstract: Empirical studies have identified a range of learnability biases and limitations of transformers, such as a persistent difficulty in learning to compute simple formal languages like PARITY, and a bias toward low-degree functions. However, theoretical understanding remains limited, and existing expressiveness theories either over- or under-predict realistic learning abilities.

This study shows that, under the transformer architecture, the loss landscape is constrained by input-space sensitivity: transformers whose outputs are sensitive to many parts of the input string occupy isolated points in parameter space, which leads to a low-sensitivity bias in generalization.

The study demonstrates, theoretically and empirically, that this theory unifies a broad range of empirical observations about transformers' learning abilities and biases, such as their generalization bias toward low sensitivity and low degree, and their difficulty with length generalization on PARITY. This suggests that understanding transformers' inductive biases requires studying not only their in-principle expressiveness but also their loss landscape.
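For a concrete sense of what "sensitive" means here, PARITY (mentioned above) is the canonical maximally sensitive function: flipping any single input bit always flips the output. A minimal illustration:

```python
# PARITY: return 1 if the number of 1-bits is odd, 0 otherwise.
# It is maximally sensitive: flipping any single input bit flips the output.
def parity(bits):
    return sum(bits) % 2

x = [1, 0, 1, 1, 0, 0, 1]        # four 1-bits -> parity 0
assert parity(x) == 0
for i in range(len(x)):
    flipped = list(x)
    flipped[i] ^= 1              # flip exactly one bit
    assert parity(flipped) != parity(x)
print("every single-bit flip changes the PARITY output")
```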



Paper 3: Deciphering Oracle Bone Language with Diffusion Models

  • Authors: Haisu Guan, Huanxin Yang, Xinyu Wang, Shengwei Han, etc.
  • Institutions: Huazhong University of Science and Technology, University of Adelaide, Anyang Normal University, South China University of Technology
  • Paper link: https://arxiv.org/pdf/2406.00684

Paper introduction: Oracle Bone Script (OBS) originated in China's Shang Dynasty about 3,000 years ago. Although thousands of inscriptions have been discovered, a large number of oracle bones remain undeciphered, leaving the ancient language shrouded in mystery. Recent advances in AI technology have opened new frontiers for oracle bone decipherment, posing a challenge for traditional NLP methods that depend heavily on large text corpora.

This paper introduces a new approach based on image-generation technology, developing a diffusion model optimized for oracle bone decipherment, the Oracle Bone Script Decipher (OBSD). Using a conditional diffusion strategy, OBSD generates important clues for deciphering oracle bone script and opens new directions for AI-assisted analysis of ancient languages. To verify its effectiveness, the researchers conducted extensive experiments on an oracle bone dataset, and the quantitative results demonstrate OBSD's effectiveness.



Paper 4: Causal Estimation of Memorisation Profiles

  • Authors: Pietro Lesci, Clara Meister, Thomas Hofmann, Andreas Vlachos, Tiago Pimentel
  • Institutions: University of Cambridge, ETH Zurich
  • Paper link: https://arxiv.org/pdf/2406.04327

Paper introduction: Understanding memorisation in language models has practical and societal implications, such as studying model training dynamics or preventing copyright infringement. Prior work defines memorisation as the causal effect of "training on an instance" on "the model's ability to predict that instance." This definition relies on a counterfactual: the ability to observe what would have happened had the model not seen that instance. Existing methods struggle to provide computationally efficient and accurate estimates of such counterfactuals, and they typically estimate memorisation for a model architecture rather than for a specific model instance.

This paper fills an important gap by proposing a new, principled, and efficient method for estimating memorisation, based on the difference-in-differences design from econometrics. With this method, the researchers only need to observe the model's behaviour on a small set of instances throughout training to characterise the model's memorisation profile, i.e., its memorisation trend over the course of training. In experiments with the Pythia model suite, they find that memorisation (i) is stronger and more persistent in larger models, (ii) is determined by the data order and the learning rate, and (iii) is stable across model sizes, so that memorisation in larger models can be predicted from smaller ones.
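As a rough illustration of the difference-in-differences idea (a sketch under our own assumptions, not the paper's implementation), the memorisation attributable to a training window can be estimated by comparing the loss drop on instances trained on inside that window with the loss drop on held-out control instances over the same window:

```python
import numpy as np

def did_memorisation(treated_before, treated_after, control_before, control_after):
    # Inputs are per-example losses measured at two checkpoints.
    # "Treated" examples were trained on inside the window; "control" examples
    # were not, so their loss drop reflects the model's general improvement.
    treated_drop = np.mean(treated_before) - np.mean(treated_after)
    control_drop = np.mean(control_before) - np.mean(control_after)
    return treated_drop - control_drop  # extra drop attributable to training on the examples

# Hypothetical logged losses:
effect = did_memorisation(
    treated_before=np.array([3.2, 3.5, 2.9]),
    treated_after=np.array([1.1, 1.4, 1.0]),
    control_before=np.array([3.1, 3.4, 3.0]),
    control_after=np.array([2.8, 3.0, 2.7]),
)
print(round(effect, 2))  # 1.7: treated examples improved far more than controls
```

Repeating such estimates over many checkpoints traces out the memorisation profile over training.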



Paper 5: Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model

  • Authors: Ahmet Üstün, Viraat Aryabumi, Zheng Xin Yong, Wei-Yin Ko, etc.
  • Institutions: Cohere, Brown University, etc.
  • Paper link: https://arxiv.org/pdf/2402.07827

Paper introduction: Recent breakthroughs in large language models (LLMs) have centred on a handful of data-rich languages. How can this progress be extended beyond those languages? The research introduces Aya, a multilingual generative language model that follows instructions in 101 languages, more than 50% of which are considered low-resource. Aya outperforms mT0 and BLOOMZ on most tasks while covering twice as many languages.

In addition, the research introduces an extensive new evaluation suite that extends the state of the art in multilingual evaluation to 99 languages. Finally, the study provides a detailed investigation of finetuning mixture composition and data pruning, as well as model toxicity, bias, and safety.



Paper 6: Semisupervised Neural Proto-Language Reconstruction

  • Authors: Liang Lu, Peirong Xie, David R. Mortensen
  • Institutions: CMU, University of Southern California
  • Paper link: https://arxiv.org/pdf/2406.05930

Award rationale: This research focuses on the fundamental task of semi-automating proto-language reconstruction in historical linguistics, proposing a new semi-supervised architecture. The method improves on previous approaches by incorporating a proto-language-to-daughter-language "reflex prediction" process into the daughter-language-to-proto-language reconstruction. The paper is a good example of how modern computational models, such as neural encoders and decoders, can contribute to linguistics.



Paper 7: Natural Language Satisfiability: Exploring the Problem Distribution and Evaluating Transformer-based Language Models (unpublished)

  • Authors: Tharindu Madusanka, Ian Pratt-Hartmann, Riza Batista-Navarro

Award rationale: This paper clearly describes a synthetic evaluation dataset for logical inference. It is a good complement to large inference datasets where it is unclear exactly which capabilities are being measured. There are principled reasons to expect certain problem classes to be harder than others, and these expectations are validated in the paper. Within each category, the authors take care to include genuinely challenging cases.

Test of Time Award

The ACL Test of Time Award honours papers that have had a long-lasting impact on the fields of natural language processing and computational linguistics. Two papers are awarded each year.



Paper 1: GloVe: Global Vectors for Word Representation

  • Authors: Jeffrey Pennington, Richard Socher, Christopher D. Manning
  • Institution: Stanford University
  • Paper link: https://aclanthology.org/D14-1162.pdf

Paper introduction: Methods for learning vector-space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. This study analyses and makes explicit the properties a model must have for such regularities to emerge in word vectors.

The study proposes a new global log-bilinear regression model, GloVe, designed to learn vector representations of words. The model combines the advantages of global matrix factorization and local context window methods.

GloVe achieved 75% accuracy on a word analogy benchmark and outperformed related models on word similarity tasks and named entity recognition.

Award rationale: Word embeddings were the cornerstone of deep learning methods for natural language processing (NLP) from 2013 to 2018, and they continue to exert significant influence. Beyond improving performance on NLP tasks, they have had a substantial impact on computational semantics, for example on word similarity and analogy. The two most influential word embedding methods are probably skip-gram/CBOW and GloVe. GloVe was proposed after skip-gram, and its relative advantage lies in its conceptual simplicity: it directly optimizes vector-space similarity based on the distributional co-occurrence statistics of words, rather than indirectly, as a set of parameters obtained from a simplified language-modelling objective.
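For reference, GloVe's training objective (from the original paper) fits word and context vectors directly to the logarithm of the global co-occurrence counts, with a weighting function that damps rare and very frequent pairs:

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,
\qquad
f(x) =
\begin{cases}
(x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\
1 & \text{otherwise}
\end{cases}
```

Here X_ij is the number of times word j occurs in the context of word i, w and w̃ are word and context vectors, and b, b̃ are biases; this count-fitting view is the "directness" the award citation contrasts with skip-gram's language-modelling objective.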





Paper 2: Measures of Distributional Similarity

  • Author: Lillian Lee
  • Institution: Cornell University
  • Paper link: https://aclanthology.org/P99-1004.pdf

Paper introduction: The author studies distributional similarity measures with the aim of improving probability estimates for unseen co-occurrence events. The contribution is three-fold: an empirical comparison of a broad range of measures; a classification of similarity functions based on the information they incorporate; and the introduction of a novel function that is superior at evaluating potential proxy distributions.



Lifetime Achievement Award

The ACL Lifetime Achievement Award was presented to Ralph Grishman. Ralph Grishman is a professor in the Department of Computer Science at New York University whose research focuses on natural language processing (NLP). He is the founder of the Proteus Project, which has made notable contributions to information extraction (IE) and advanced the development of the field.



He also developed the Java Extraction Toolkit (JET), a widely used information extraction tool that provides a range of language analysis components, such as sentence segmentation, entity annotation, temporal expression annotation and normalization, part-of-speech tagging, partial parsing, and coreference analysis. These components can be combined into pipelines for different applications and used either for interactive analysis of individual sentences or for analysis of entire documents. In addition, JET provides simple tools for document annotation and display, and includes a complete pipeline for extracting entities, relations, and events according to the ACE (Automatic Content Extraction) specification.

Professor Grishman's work covers multiple core problems in NLP and has had a profound impact on today's language-processing technology.

35 Outstanding Papers

  • Paper 1: Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models
  • Authors: Zhengxin Zhang, Dan Zhao, Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Qing Li, Yong Jiang, Zhihao Jia
  • Institutions: CMU, Tsinghua University, Peng Cheng Laboratory, etc.
  • Paper link: https://arxiv.org/pdf/2401.07159
  • Paper 2: L-Eval: Instituting Standardized Evaluation for Long Context Language Models
  • Authors: Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, Xipeng Qiu
  • Institutions: Fudan University, University of Hong Kong, University of Illinois at Urbana-Champaign, Shanghai AI Lab
  • Paper link: https://arxiv.org/abs/2307.11088
  • Paper 3: Causal-Guided Active Learning for Debiasing Large Language Models
  • Paper link: https://openreview.net/forum?id=idp_1Q6F-lC
  • Paper 4: CausalGym: Benchmarking Causal Interpretability Methods on Linguistic Tasks
  • Authors: Aryaman Arora, Dan Jurafsky, Christopher Potts
  • Institution: Stanford University
  • Paper link: https://arxiv.org/abs/2402.12560
  • Paper 5: Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration
  • Authors: Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, Yulia Tsvetkov
  • Institutions: University of Washington, University of California, Berkeley, Hong Kong University of Science and Technology, CMU
  • Paper link: https://arxiv.org/abs/2402.00367
  • Paper 6: Speech Translation with Speech Foundation Models and Large Language Models: What is There and What is Missing?
  • Authors: Marco Gaido, Sara Papi, Matteo Negri, Luisa Bentivogli
  • Institution: Bruno Kessler Foundation, Italy
  • Paper link: https://arxiv.org/abs/2402.12025
  • Paper 7: Must NLP be Extractive?
  • Author: Steven Bird
  • Institution: Charles Darwin University
  • Paper link: https://drive.google.com/file/d/1hvF7_WQrou6CWZydhymYFTYHnd3ZIljV/view
  • Paper 8: IRCoder: Intermediate Representations Make Language Models Robust Multilingual Code Generators
  • Authors: Indraneil Paul, Goran Glavaš, Iryna Gurevych
  • Institutions: Technical University of Darmstadt, etc.
  • Paper link: https://arxiv.org/abs/2403.03894
  • Paper 9: MultiLegalPile: A 689GB Multilingual Legal Corpus
  • Authors: Matthias Stürmer, Veton Matoshi, etc.
  • Institutions: University of Bern, Stanford University, etc.
  • Paper link: https://arxiv.org/pdf/2306.02069
  • Paper 10: PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety
  • Authors: Zaibin Zhang, Yongting Zhang, Lijun Li, Hongzhi Gao, Lijun Wang, Huchuan Lu, Feng Zhao, Yu Qiao, Jing Shao
  • Institutions: Shanghai Artificial Intelligence Laboratory, Dalian University of Technology, University of Science and Technology of China
  • Paper link: https://arxiv.org/pdf/2401.11880
  • Paper 11: Can Large Language Models be Good Emotional Supporters? Mitigating Preference Bias in Emotional Support Conversation
  • Authors: Dongjin Kang, Sunghwan Kim, etc.
  • Institution: Yonsei University, etc.
  • Paper link: https://arxiv.org/pdf/2402.13211
  • Paper 12: Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
  • Authors: Paul Röttger, Valentin Hofmann, etc.
  • Institutions: Bocconi University, Allen Institute for Artificial Intelligence, etc.
  • Paper link: https://arxiv.org/pdf/2402.16786
  • Paper 13: Same Task, More Tokens: The Impact of Input Length on the Reasoning Performance of Large Language Models
  • Authors: Mosh Levy, Alon Jacoby, Yoav Goldberg
  • Institutions: Bar-Ilan University, Allen Institute for Artificial Intelligence
  • Paper link: https://arxiv.org/pdf/2402.14848
  • Paper 14: Do Llamas Work in English?
  • Authors: Chris Wendler, Veniamin Veselovsky, etc.
  • Institution: Ecole Polytechnique Fédérale de Lausanne
  • Paper link: https://arxiv.org/pdf/2402.10588
  • Paper 15: Getting Serious about Humor: Crafting Humor Datasets with Unfunny Large Language Models
  • Authors: Zachary Horvitz, Jingru Chen, etc.
  • Institutions: Columbia University, Ecole Polytechnique Fédérale de Lausanne
  • Paper link: https://arxiv.org/pdf/2403.00794
  • Paper 16: Estimating the Level of Dialectness Predicts Inter-annotator Agreement in Multi-dialect Arabic Datasets
  • Authors: Amr Keleg, Walid Magdy, Sharon Goldwater
  • Institution: University of Edinburgh
  • Paper link: https://arxiv.org/pdf/2405.11282
  • Paper 17: G-DIG: Towards Gradient-based Diverse and High-quality Instruction Data Selection for Machine Translation
  • Authors: Xingyuan Pan, Luyang Huang, Liyan Kang, Zhicheng Liu, Yu Lu, Shanbo Cheng
  • Institution: ByteDance Research
  • Paper link: https://arxiv.org/pdf/2405.12915
  • Paper 18: Media Framing: A Typology and Survey of Computational Approaches Across Disciplines
  • Authors: Yulia Otmakhova, Shima Khanehzar, Lea Frermann
  • Paper link: https://openreview.net/pdf?id=9AV_zM56pwj
  • Paper 19: SPZ: A Semantic Perturbation-based Data Augmentation Method with Zonal-Mixing for Alzheimer's Disease Detection
  • Authors: FangFang Li, Cheng Huang, PuZhen Su, Jie Yin
  • Paper 20: Greed is All You Need: An Evaluation of Tokenizer Inference Methods
  • Institutions: Ben-Gurion University of the Negev, MIT
  • Authors: Omri Uzan, Craig W. Schmidt, Chris Tanner, Yuval Pinter
  • Paper link: https://arxiv.org/abs/2403.01289
  • Paper 21: Language Complexity and Speech Recognition Accuracy: Orthographic Complexity Hurts, Phonological Complexity Doesn't
  • Institution: University of Notre Dame (USA)
  • Authors: Chihiro Taguchi, David Chiang
  • Paper link: https://arxiv.org/abs/2406.09202
  • Paper 22: Steering Llama 2 via Contrastive Activation Addition
  • Institutions: Anthropic, Harvard University, University of Göttingen (Germany), Center for Human-Compatible AI
  • Authors: Nina Rimsky, Nick Gabrieli, Julian Schulz, Meg Tong, Evan J Hubinger, Alexander Matt Turner
  • Paper link: https://arxiv.org/abs/2312.06681
  • Paper 23: EconAgent: Large Language Model-Empowered Agents for Simulating Macroeconomic Activities
  • Institutions: Tsinghua Shenzhen International Graduate School, Tsinghua University
  • Authors: Nian Li, Chen Gao, Mingyu Li, Yong Li, Qingmin Liao
  • Paper link: https://arxiv.org/abs/2310.10436
  • Paper 24: M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models
  • Institutions: University of Hong Kong, Huawei Noah's Ark Lab, Hong Kong University of Science and Technology
  • Authors: Wai-Chung Kwan, Xingshan Zeng, Yufei Wang, Yusen Sun, Liangyou Li, Lifeng Shang, Qun Liu, Kam-Fai Wong
  • Paper link: https://arxiv.org/abs/2310.19240
  • Paper 25: CHECKWHY: Causal Fact Verification via Argument Structure
  • Authors: Jiasheng Si, Yibo Zhao, Yingjie Zhu, Haiyang Zhu, Wenpeng Lu, Deyu Zhou
  • Paper 26: On Efficient and Statistical Quality Estimation for Data Annotation
  • Authors: Jan-Christoph Klie, Juan Haladjian, Marc Kirchner, Rahul Nair
  • Institutions: UKP Lab, TU Darmstadt, Apple
  • Paper link: https://arxiv.org/pdf/2405.11919
  • Paper 27: Emulated Disalignment: Safety Alignment for Large Language Models May Backfire!
  • Authors: Zhanhui Zhou, Jie Liu, Zhichen Dong, Jiaheng Liu, Chao Yang, Wanli Ouyang, Yu Qiao
  • Institution: Shanghai Artificial Intelligence Laboratory
  • Paper link: https://arxiv.org/pdf/2402.12343
  • Paper 28: IndicLLMSuite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for Indian Languages
  • Authors: Mohammed Safi Ur Rahman Khan, Priyam Mehta, Ananth Sankar, etc.
  • Institutions: Nilekani Centre at AI4Bharat, Indian Institute of Technology (Madras), Microsoft, etc.
  • Paper link: https://arxiv.org/pdf/2403.06350
  • Paper 29: MultiPICo: Multilingual Perspectivist Irony Corpus
  • Authors: Silvia Casola, Simona Frenda, Soda Marem Lo, Erhan Sezerer, etc.
  • Institutions: University of Turin, Aequa-tech, Amazon Development Center (Italy), etc.
  • Paper link: https://assets.amazon.science/08/83/9b686f424c89b08e8fa0a6e1d020/multipico-multilingual-perspectivist-irony-corpus.pdf
  • Paper 30: MMToM-QA: Multimodal Theory of Mind Question Answering
  • Authors: Chuanyang Jin, Yutong Wu, Jing Cao, Jiannan Xiang, etc.
  • Institutions: New York University, Harvard University, MIT, University of California, San Diego, University of Virginia, Johns Hopkins University
  • Paper link: https://arxiv.org/pdf/2401.08743
  • Paper 31: MAP's not dead yet: Uncovering true language model modes by conditioning away degeneracy
  • Authors: Davis Yoshida, Kartik Goyal, Kevin Gimpel
  • Institutions: Toyota Technological Institute at Chicago, Georgia Institute of Technology
  • Paper link: https://arxiv.org/pdf/2311.08817
  • Paper 32: NounAtlas: Filling the Gap in Nominal Semantic Role Labeling
  • Authors: Roberto Navigli, Marco Lo Pinto, Pasquale Silvestri, etc.
  • Paper 33: The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation
  • Authors: Rongwu Xu, Brian S. Lin, Shujian Yang, Tianqi Zhang, etc.
  • Institutions: Tsinghua University, Shanghai Jiao Tong University, Stanford University, Nanyang Technological University
  • Paper link: https://arxiv.org/pdf/2312.09085
  • Paper 34: Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation
  • Authors: Se Jin Park, Chae Won Kim, Hyeongseop Rha, Minsu Kim, etc.
  • Institution: Korea Advanced Institute of Science and Technology (KAIST)
  • Paper link: https://arxiv.org/pdf/2406.07867
  • Paper 35: Word Embeddings Are Steers for Language Models
  • Authors: Chi Han, Jialiang Xu, Manling Li, Yi Fung, Chenkai Sun, Nan Jiang, Tarek F. Abdelzaher, Heng Ji
  • Institution: University of Illinois at Urbana-Champaign
  • Paper link: https://arxiv.org/pdf/2305.12798

Best Theme Paper Award



Paper: OLMo: Accelerating the Science of Language Models

  • Authors: Dirk Groeneveld, Iz Beltagy, etc.
  • Institutions: Allen Institute for Artificial Intelligence, University of Washington, etc.
  • Paper link: https://arxiv.org/pdf/2402.00838

Award citation: This work is an important step toward transparency and reproducibility in the training of large language models, and a step forward for community-driven efforts (or at least it enables researchers outside the industry giants to contribute).

Resource Paper Award

Three papers won the Resource Paper Award.

Paper 1: Latxa: An Open Language Model and Evaluation Suite for Basque

Institution: University of the Basque Country, Spain

  • Authors: Julen Etxaniz, Oscar Sainz, Naiara Perez, Itziar Aldabe, German Rigau, Eneko Agirre, Aitor Ormazabal, Mikel Artetxe, Aitor Soroa
  • Link: https://arxiv.org/pdf/2403.20266

Award rationale: This paper describes in detail the corpus collection and the evaluation data. Although it concerns research on the Basque language, the methodology can be extended to building large models for other low-resource languages.

Paper 2: Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

  • Institutions: Allen Institute for Artificial Intelligence, University of California, Berkeley, etc.
  • Authors: Luca Soldaini, Rodney Kinney, etc.
  • Link: https://arxiv.org/abs/2402.00159

Award rationale: This paper demonstrates the importance of data management when preparing datasets for training large language models. It is extremely valuable to a wide range of researchers in the community.

Paper 3: AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents

  • Institutions: Stony Brook University, Allen Institute for Artificial Intelligence, etc.
  • Authors: Harsh Trivedi, Tushar Khot, etc.
  • Link: https://arxiv.org/abs/2407.18901

Award rationale: This research is an important and impressive piece of work on building interactive environments for simulation and evaluation. It should encourage everyone to build more dynamic benchmarks for the community.

Social Impact Award

Three papers won the Social Impact Award.

Paper 1: How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs

  • Authors: Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, etc.
  • Institutions: Virginia Tech, Renmin University of China, University of California, Davis, Stanford University
  • Paper link: https://arxiv.org/pdf/2401.06373

Award rationale: This paper explores an AI safety topic, jailbreaking, by applying a method developed in social science research. The study is very interesting and has the potential to make a significant impact on the community.

Paper 2: DIALECTBENCH: An NLP Benchmark for Dialects, Varieties, and Closely-Related Languages

  • Authors: Fahim Faisal, Orevaoghene Ahia, Aarohi Srivastava, Kabir Ahuja, etc.
  • Institutions: George Mason University, University of Washington, University of Notre Dame, RC Athena
  • Paper link: https://arxiv.org/pdf/2403.11009

Award rationale: Dialect variation is a phenomenon that NLP and artificial intelligence have paid relatively little attention to. Yet from the perspective of language and society, its study is of great value and has important applications. This paper proposes a highly novel benchmark for investigating this question in the LLM era.

Paper 3: Having Beer after Prayer? Measuring Cultural Bias in Large Language Models

  • Authors: Tarek Naous, Michael J. Ryan, Alan Ritter, Wei Xu
  • Institution: Georgia Institute of Technology
  • Paper link: https://arxiv.org/pdf/2305.14456

Award rationale: This paper highlights an important issue of the LLM era: cultural bias. It studies Arabic cultural and linguistic settings, and the results show that cultural differences must be taken into account when designing LLMs. The same study can therefore be replicated in other cultural settings to generalize the findings and assess whether other cultures are also affected by this problem.