# Taipei | Claude and AI Ethics
**Event date:** 02/05/2026
* Published on 02/05/2026


### Image gallery
![1.png](https://firebasestorage.googleapis.com/v0/b/memory-ai.appspot.com/o/prod%2FrKxsdSTpqCfzIFY8Y2hg%2FprojectsMedias%2Fy818FZS7jxdVqVK83Lrk%2Fthumbs%2F1_1600x900.png?alt=media&token=d2f065ff-d3e9-4fad-ab08-bf0b9bbea236) 

### City
`#Taipei City` 

## Description
  
Join a Claude Community Event, organized by local enthusiasts for everyone who loves building with Claude Code!  
This time, we’re doing something a little different. We’re gathering a group of interested participants who want to discuss a serious but important topic: ethics surrounding AI and the future of technology.  
Now that AI has found its way into war and geopolitical conflict, what does it actually mean to build AI responsibly, and who gets to decide? This meetup takes that question seriously through two lenses.  
The first is Anthropic's ongoing federal lawsuit against the U.S. Department of War. Anthropic was once the department's main AI model provider. During the recent geopolitical crisis, however, the Pentagon demanded that Anthropic remove its restrictions on using Claude for autonomous lethal warfare and mass domestic surveillance, then designated Anthropic a "supply chain risk to national security" when it refused. The court filings — including supporting briefs from Microsoft and from individual engineers at OpenAI and Google — reveal how AI safety principles collide with state power.  
The second lens is Anthropic's study of 81,000 people across 159 countries on what they actually want from AI. The project is led by a researcher affiliated with the Collective Intelligence Project (CIP) and focuses on understanding AI users' views on AI governance. At the same time, its scale and representativeness raise concerns of their own.  
Both readings happen to come from Anthropic. Put side by side, they raise something more interesting: if the lawsuit is about who gets to set limits on AI, the 81,000-person study is a reminder that those limits affect real people with real and conflicting feelings about it.  
When a model used by hundreds of millions of people normalizes certain ways of thinking about what's safe or ethical, what kind of accountability does that demand? The institutions debating AI policy and the people living with it daily are often not in the same conversation, and maybe they should be.  
Given the theme of this meetup, we are intentionally keeping the group small so we can have deeper discussions. If you're passionate about this topic, please tell us why when you RSVP for the event. We welcome all backgrounds and diverse perspectives.  
Before the meetup, please take a look at the relevant documents so you can participate fully in the discussion:  
- Anthropic's ongoing federal lawsuit against the U.S. Department of War (link)  
- Supporting briefs from Microsoft (link) and individual engineers at OpenAI and Google (link)  
- Anthropic's study of 81,000 people (link)  
- Supplementary reading, an Initium Media article: Anthropic 與國防部之爭：誰有權決定AI的邊界？ ("The Anthropic–Department of War dispute: who gets to decide the boundaries of AI?") (link)  
We have also invited speakers from diverse backgrounds who share an interest in, and expertise on, AI governance.  
Speakers:  
- **Moderator & Discussant — Peter Cui:** Graduate student at National Taiwan University College of Law, legal researcher, and contributor to the vTaiwan community. Focuses on AI governance and regulation. ISF Global Fellow 2025.  
- **Defense Industry Panelist — Anshuman Prasad:** Spent 8 years at Palantir Technologies working on projects including edge AI capabilities for Project Maven, as well as applications in vaccine distribution and aerospace. Currently leads the engineering team at Valinor.  
- **Tech and Governance Panelist — Nicole Chan:** Former Chairperson of Taiwan's National Communications Commission. Currently an industry advisor and practicing attorney (ICANN ASO AC), and Chairperson of the Artificial Intelligence Foundation and the Digital Trust Association in Taiwan.  
Other details:  
- **Lunch:** Because one of our speakers is dialing in from overseas, the meetup will start at 9 AM. Since the discussions will run through lunch, we will order bento meals for every participant. Please let us know when you RSVP if you'd like us to order lunch for you. It's on us!  
- **Language:** English and Mandarin will be used interchangeably. Use whichever language you feel most comfortable speaking.  
Thanks to all those who made this possible:  
- Venue: g0v community space  
- Planning: Web3 For All community  
- Support: Claude community team  
If you're interested but can't make it, we will be livestreaming the event, and we'll share key takeaways with the wider community afterward. If we accept your RSVP and you can no longer attend, please let us know in advance so we can offer your spot to someone else; no-shows who don't communicate with us will be deprioritized for future events.  
All views expressed at this meetup are those of the individual speakers and participants. We do not claim to represent Anthropic in any way.  

**Event link:** [https://luma.com/event/evt-qsYYCnqPZ8HTlWF](https://luma.com/event/evt-qsYYCnqPZ8HTlWF)

### Country
`#Taiwan` 

### Continent
`#Asia` 

**Associated media:**
[Media 1](https://80954c1d.sibforms.com/serve/MUIFABojU8UBbDiX_TdcGa7Wv5VMoVB_nBZ92mkLkGlS1pJLpP7s-pVJusyN-7cG9KPrSuv3fv7TmXwuw_AoyNUShR8jZhmNDgUbZPJO2V5xYXlNz4YXOTjSb8X7Lj7PRIPzgzEWlLbA4f4uw_F8RM51EUsjSfQQko0qaby98GHMdYJVWLIXd5JzzaXBGmqN2CcYOFuqnbnaYEnw) 

## event_id
evt-qsYYCnqPZ8HTlWF@events.lu.ma

### Tools
`#Claude` `#Anthropic` `#OpenAI` `#Google` 



