3. κ·€κ°€ νŠΈμ΄λŠ” μ˜μ–΄

24.08.19 OpenAI Employees Warn of AI Risks and Lack of Oversight

ν•˜λŠ˜μ„ λ‚˜λŠ” λ‹₯ν„° 고아원 짱 2024. 10. 9. 23:40

Current and former OpenAI employees published an open letter expressing unease about the rapid advancement of AI without sufficient oversight and whistleblower protections.

μ „ν˜„μ§ OpenAI 직원듀이 관리 감독 및 λ‚΄λΆ€ 고발자 λ³΄ν˜Έκ°€ λΆ€μ‘±ν•œ μƒν™©μ—μ„œ κΈ‰μ†λ„λ‘œ λ°œμ „ν•˜λŠ” AI에 λŒ€ν•΄ 우렀λ₯Ό ν‘œλͺ…ν•˜λŠ” λ‚΄μš©μ˜ 곡개 μ„œν•œμ„ λ°œν‘œν–ˆλ‹€.

* unease : λΆˆμ•ˆ, κ±±μ •

* sufficient : μΆ©λΆ„ν•œ

* oversight : κ°μ‹œ, 감독

* whistleblower : (비리 λ“±μ˜) λ‚΄λΆ€κ³ λ°œμž

* protection : 보호


They pointed out the strong financial incentives AI companies have to avoid effective regulation.

이듀은 AI 기업듀이 효과적인 규제λ₯Ό ν”Όν•˜λ €λŠ” κ°•ν•œ 경제적 μΈμ„Όν‹°λΈŒλ₯Ό 가지고 μžˆλ‹€κ³  μ§€μ ν–ˆλ‹€.

* point something out : ⋯을 μ§€μ ν•˜λ‹€

* regulation : 규제; κ·œμ •, λ²•κ·œ


The letter also stressed the importance of sharing information with governments and civil society.

β€‹λ˜ν•œ μ„œν•œμ€ μ •λΆ€ 및 μ‹œλ―Ό μ‚¬νšŒμ™€ 정보λ₯Ό κ³΅μœ ν•˜λŠ” 것이 μ€‘μš”ν•˜λ‹€κ³  κ°•μ‘°ν–ˆλ‹€.

* civil : μ‹œλ―Όμ˜, μ‹œλ―ΌμœΌλ‘œμ„œμ˜ (civil society : μ‹œλ―Ό μ‚¬νšŒ)


The group called for AI companies to commit to transparency and support a culture of open criticism.

직원 λ‹¨μ²΄λŠ” AI 기업듀이 투λͺ…성에 λŒ€ν•œ μ˜μ§€λ₯Ό 보이고 개방적인 λΉ„νŒ λ¬Έν™”λ₯Ό 지원할 것을 μš”κ΅¬ν–ˆλ‹€.

* call for something : ⋯을 κ°•λ ₯히 μš”κ΅¬ν•˜λ‹€[ν˜Έμ†Œν•˜λ‹€]

* commit to something : ⋯을 ν™•μ‹€νžˆ μ•½μ†ν•˜λ‹€

* transparency : (κΈ°κ΄€βˆ™μ‘°μ§ λ“±μ˜) 투λͺ…μ„±

* criticism : λΉ„νŒ, λΉ„λ‚œ


It also pushed to establish processes for employees to report concerns anonymously.

λ˜ν•œ 직원듀이 우렀 사항을 읡λͺ…μœΌλ‘œ 보고할 수 μžˆλŠ” 절차 μˆ˜λ¦½μ„ μ΄‰κ΅¬ν–ˆλ‹€.

* push to do something : β‹―ν•  것을 κ°•λ ₯히 μš”κ΅¬ν•˜λ‹€, μ΄‰κ΅¬ν•˜λ‹€

* establish : <μ œλ„βˆ™μ ˆμ°¨ λ“±>을 μˆ˜λ¦½ν•˜λ‹€, κ΅¬μΆ•ν•˜λ‹€

* anonymously : 이름을 μ•Œλ¦¬μ§€ μ•Šκ³ , 읡λͺ…μœΌλ‘œ


The employees emphasized that without effective government oversight, internal whistleblower protections are inadequate as many AI-related risks are not yet regulated.

직원듀은 λ§Žμ€ AI κ΄€λ ¨ μœ„ν—˜ μš”μ†Œκ°€ 아직 κ·œμ œλ˜μ§€ μ•ŠκΈ° λ•Œλ¬Έμ— μ •λΆ€μ˜ 효과적인 관리 감독 μ—†μ΄λŠ” λ‚΄λΆ€ 고발자 λ³΄ν˜Έκ°€ λΆˆμΆ©λΆ„ν•˜λ‹€κ³  κ°•μ‘°ν–ˆλ‹€.

* internal : (쑰직) λ‚΄λΆ€μ˜, 내뢀적인

* inadequate : λΆˆμΆ©λΆ„ν•œ, λΆ€μ λ‹Ήν•œ

* regulate : (κ·œμΉ™βˆ™λ²•λ₯  λ“±μœΌλ‘œ) ⋯을 κ·œμ œν•˜λ‹€, κ°λ…ν•˜λ‹€


They urged AI companies to refrain from enforcing non-disparagement agreements and to allow employees to voice concerns freely.

직원듀은 AI 기업듀이 λΉ„νŒμ„ κΈˆμ§€ν•˜λŠ” κ³„μ•½μ˜ 이행을 κ°•μ œν•˜μ§€ 말고 직원듀이 자유둭게 우렀λ₯Ό ν‘œλͺ…ν•  수 μžˆλ„λ‘ ν—ˆμš©ν•  것을 μ΄‰κ΅¬ν–ˆλ‹€.

* urge somebody to do something : β‹―μ—κ²Œ β‹―ν•˜λΌκ³  μ΄‰κ΅¬ν•˜λ‹€, κ°•λ ₯히 κΆŒκ³ ν•˜λ‹€

* refrain from (doing) something : β‹―(ν•˜λŠ” 것)을 μ‚Όκ°€λ‹€, μžμ œν•˜λ‹€

* enforce : <법λ₯ βˆ™κ³„μ•½ λ“±>을 μ§‘ν–‰ν•˜λ‹€, κ°•μ œν•˜λ‹€


The letter was signed by several current and former OpenAI employees, along with renowned computer scientists, demonstrating the industry's broad support.

이 μ„œν•œμ—λŠ” μ—¬λŸ¬ μ „ν˜„μ§ OpenAI 직원듀과 μ €λͺ…ν•œ 컴퓨터 κ³Όν•™μžλ“€μ΄ 쑰인해 μ—…κ³„μ˜ 폭넓은 지지λ₯Ό μž…μ¦ν–ˆλ‹€.

* renowned : <μ‚¬λžŒβˆ™μž₯μ†Œ 등이> 유λͺ…ν•œ, λͺ…성이 μžμžν•œ

* demonstrate : <μ—°κ΅¬βˆ™μ‹€ν—˜ 등이> ⋯을 μž…μ¦ν•˜λ‹€, λΆ„λͺ…νžˆ 보여주닀

* broad : <λ²”μœ„βˆ™λΆ„μ•Ό 등이> 폭넓은, κ΄‘λ²”μœ„ν•œ


The letter follows recent controversies at OpenAI, including the disbanding of its safety team.

졜근 OpenAI의 μ•ˆμ „ νŒ€ 해체λ₯Ό λΉ„λ‘―ν•œ λ…Όλž€μ΄ 일자 이번 μ„œν•œμ΄ λ°œν‘œλ˜μ—ˆλ‹€.

* follow : <μ‚¬κ±΄βˆ™ν–‰μœ„ 등이> λ’€λ”°λ₯΄λ‹€, 뒀이어 μΌμ–΄λ‚˜λ‹€

* controversy : λ…Όλž€, λ…ΌμŸ

* disband : <κ΅°λŒ€βˆ™μ‘°μ§ 등이> ν•΄μ²΄λ˜λ‹€, ν•΄μ‚°λ˜λ‹€
