try! Swift NYC is back! This year, the focus of the conference will be on AI + Swift / iOS Development. Topics will include best practices for coding with AI, integrating AI into your app, designing with AI, AI ethics, and more! Everything will be covered in a one-day conference on September 5th, followed by two days of workshops (4 to attend in total) where you can get hands-on and experience AI development for yourself! Join the future of Swift development!
Follow us on Twitter at @tryswiftworld for the latest updates and announcements!
We are committed to providing a safe space for all of our attendees, speakers, and volunteers. Our Code of Conduct can be read in full here.
Speakers
Emad Ghorbaninia
Lead iOS Engineer
Emad is a Lead iOS Engineer at the Ministry of Taxation in Denmark and an article author, tech editor, and video instructor at Raywenderlich. When he's not in front of a computer, he is usually playing board games 🎲 and video games 🎮 or watching series 📺. Believe it or not, sometimes he even plays the harmonica 🎼.
Vatsal Manot
Swift + AI
Vatsal fell in love with his jailbroken iPod touch 4G back in 2011 as a twelve-year-old, and has been programming on Apple platforms ever since. He's extremely passionate about the Swift community and maintains numerous Swift/SwiftUI OSS projects.
His friends would say that he loves building frameworks, almost to a fault - but he’s okay with that :p
At the moment, Vatsal is laser-focused on bridging the gap between Swift developers and generative AI. He's working obsessively on bringing state-of-the-art LLM tooling to Apple platforms via native Swift libraries, and is excited to see what the community builds with it.
Mohammad Azam
Lead Mobile Developer
Mohammad Azam is a highly experienced and accomplished developer with over a decade of professional experience in writing software. He has played an integral role in the success of several Fortune 500 companies including Valic, AIG, Dell, Baker Hughes, and Blinds.com, where he served as a lead mobile developer.
Azam's expertise has helped him become a top instructor on both Udemy and LinkedIn, with more than 70K students enrolled in his courses. He currently serves as a lead instructor at DigitalCrafts, a software bootcamp where he trains developers who now work at prestigious companies like Apple, JP Morgan Chase, and Exxon.
Azam is not only a developer and instructor but also an international speaker who has been sharing his knowledge and expertise since 2006. In his free time, he enjoys exercising and planning his next adventure to explore the unknown corners of the world.
Stefan Blos
Developer Advocate @ Stream
Developer Advocate at Stream, mostly doing iOS. Previously worked as a web, cloud, and mobile developer. M.Sc. in Computer Science with a focus on ML and AI. Master at starting side projects. Likes all kinds of technologies and sports. Legally certified to make Dad jokes.
My passion is building products that users love and that enrich their lives. The tech behind that shouldn't matter too much, but with the possibilities we have, we can change people's lives, and that is what we should really focus on.
Craig Clayton
Senior iOS engineer @ Ed Farm
Craig Clayton is a self-taught senior iOS engineer, instructor, and mentor at Ed Farm, specializing in cultivating change and promoting innovation in education. He also volunteered as the organizer of the Suncoast iOS meetup group in the Tampa/St. Petersburg area for three years, preparing presentations and hands-on talks for this group and other groups in the community.
Cristian Díaz
iOS/hardware interconnectivity
I am a software developer based in Berlin with 20+ years of experience in programming. My specialization is in iOS/hardware interconnectivity, and I am currently focused on creating products related to extended reality and accessibility. I have worked in various industries, including automotive hardware, medical products, and the environmental sector, which has given me a broad perspective on software development. My professional interests are centered around using technology to make a positive impact on people's lives, and I am committed to achieving this through my work.
Jonathan Blocksom
Senior Software Developer
Jonathan Blocksom is a software developer based in Northern Virginia with an extensive background in 3D graphics, computer vision, and mobile programming.
Tim Oliver
Lead iOS Engineer, Open Source Contributor
Tim’s been a fanboy of iOS since the iPhone 3G and a full-time iOS developer since 2013. He currently works as an iOS engineer at Instagram, and before that, at Drivemode, both in Japan. In his free time, he enjoys contributing to the open source iOS community, attempting karaoke and playing video games.
September 5
8:30 am - Registration & Breakfast
9:30 am - Opening Remarks
9:45 - A new bicycle for the mind: AI in big and small tech
Steve Jobs once said computers are a bicycle for the human mind. The features and capabilities AI enables for us today are an elevation of that. Drawing on his experience working in big tech and as an indie developer, Tim will introduce Llama 2, a new AI model released by Meta, and share why he's excited about how these technologies will accelerate indie iOS app development.
10:15 - Working with language (models) on iOS
ChatGPT, GPT-4, and LLMs are the talk of the town. But aside from racist chatbots, obviously incorrect quotes, and psychedelic images, how can we really leverage these language models to improve our apps? Let's discover APIs that offer services we can really use in day-to-day development and go down the rabbit hole up to the point of using custom CoreML models and even training them. We'll start with what Apple has given us in the Natural Language framework and slowly build our way up to large models, such as ChatGPT and Whisper, that we can use for incredibly powerful applications.
There are many different layers to AI and machine learning on mobile. We can leverage the power of the built-in processors for on-device machine learning tasks, but sometimes it is necessary to run compute-heavy tasks somewhere else. We'll explore all the possibilities that we have for this!
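For a hedged taste of that on-device starting point, here is a minimal sketch using Apple's Natural Language framework; the sample text is purely illustrative:

```swift
import NaturalLanguage

// Minimal on-device text analysis sketch (illustrative sample text).
let text = "try! Swift NYC is back, and the talks look fantastic!"

// Detect the dominant language of the text.
let recognizer = NLLanguageRecognizer()
recognizer.processString(text)
print("Language:", recognizer.dominantLanguage?.rawValue ?? "unknown")

// Sentiment scoring (-1.0 ... 1.0) runs entirely on device, no network call needed.
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text
let (sentiment, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
print("Sentiment:", sentiment?.rawValue ?? "n/a")
```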
10:45 - Bringing Existing Apps to visionOS: Adopting Accessible Features for Spatial Computing
Step into the future of app development by transforming your current apps to fit seamlessly into visionOS, Apple's incredible spatial computing platform. We will explore how to integrate accessible features into the spatial computing context, and learn practical techniques for creating an inclusive experience that will improve the quality of life of your users. Don't miss out on the opportunity to leverage the power of visionOS while ensuring the accessibility of your apps.
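For a hedged flavor of what this looks like in code, the sketch below shows the SwiftUI accessibility modifiers that carry over to spatial experiences; the view, label, and hint are illustrative assumptions, not session material:

```swift
import SwiftUI

// Assumed example view: the same SwiftUI accessibility modifiers you use on iOS
// today describe your spatial UI to VoiceOver on visionOS.
struct WatchCardView: View {
    var body: some View {
        VStack {
            Image(systemName: "applewatch")
                .font(.system(size: 64))
            Text("Apple Watch Ultra")
        }
        .padding()
        // Merge the image and text into one element with a single, descriptive label.
        .accessibilityElement(children: .combine)
        .accessibilityLabel("Apple Watch Ultra")
        .accessibilityHint("Double tap to view details")
        .accessibilityAddTraits(.isButton)
    }
}
```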
11:15 am - Break
11:30 - Code That Teaches Itself: A Superior Approach to iOS Development using GPT-Powered Coding
I'll show you how GPT-Powered Coding is revolutionizing iOS development. Instead of following conventional solutions from Stack Overflow, you'll learn how to use GPT to generate high-quality, unique code that perfectly fits your app's needs. I'll walk you through the benefits of GPT, how it can improve your code quality and reduce development time, and showcase real-world examples of GPT-Powered Coding. You'll also gain practical guidance and tips for integrating GPT into your workflow.
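As one hedged illustration of wiring GPT into a coding workflow, the sketch below calls OpenAI's Chat Completions REST endpoint from Swift to request code for a described task; the model name, prompts, and helper types are assumptions for illustration, not the speaker's own setup:

```swift
import Foundation

// Minimal Codable wrappers for the Chat Completions request/response shapes.
struct ChatMessage: Codable { let role: String; let content: String }
struct ChatRequest: Codable { let model: String; let messages: [ChatMessage] }
struct ChatChoice: Codable { let message: ChatMessage }
struct ChatResponse: Codable { let choices: [ChatChoice] }

// Ask the model to write Swift code for a task (assumed prompt and model name).
func generateSwiftCode(for task: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(ChatRequest(
        model: "gpt-4",
        messages: [
            ChatMessage(role: "system", content: "You are a senior Swift engineer. Reply with code only."),
            ChatMessage(role: "user", content: task)
        ]
    ))
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(ChatResponse.self, from: data).choices.first?.message.content ?? ""
}
```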
12:00 - CreateML for iOS Developers
You'll learn core machine learning concepts and how to use CreateML to train models for image, text, and tabular data classification, building models that distinguish cats from dogs, classify text and images, and analyze tabular data.
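For a hedged sense of how little code training takes, here is a minimal CreateML sketch that runs in a macOS playground; the CSV path and the "text"/"label" column names are assumptions for illustration:

```swift
import CreateML
import Foundation

// Load a labeled dataset (assumed CSV with "text" and "label" columns).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/reviews.csv"))

// Hold out 20% of the rows for evaluation.
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 42)

// Train a text classifier from the labeled column.
let classifier = try MLTextClassifier(trainingData: trainingData,
                                      textColumn: "text",
                                      labelColumn: "label")

// Evaluate, then export a Core ML model you can drop into an iOS app.
let evaluation = classifier.evaluation(on: testingData, textColumn: "text", labelColumn: "label")
print("Accuracy: \((1.0 - evaluation.classificationError) * 100)%")
try classifier.write(to: URL(fileURLWithPath: "/path/to/TextClassifier.mlmodel"))
```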
12:30 - 3D Assets for Spatial Computing
Creating and managing 3D content is a new challenge for iOS developers seeking to create apps for the Vision Pro, but there are lots of new and old tools we can use to help us. We will start by learning how to build and edit virtual assets in Reality Composer and Xcode. Then we'll see how to scan objects and rooms using RealityKit SDKs and our mobile iOS devices. Finally, we will take a look at the overall USD ecosystem and what other tools and resources there are to help us build great new spatial computing experiences!
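As a hedged sketch of the scanning step, the snippet below uses RealityKit's Object Capture (PhotogrammetrySession) to turn a folder of photos into a USDZ model; the file paths are placeholders:

```swift
import Foundation
import RealityKit

// Reconstruct a 3D asset from captured photos (paths are assumed placeholders).
let inputFolder = URL(fileURLWithPath: "/path/to/captured-photos", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/Chair.usdz")

let session = try PhotogrammetrySession(input: inputFolder)

// Observe progress and completion as the session reconstructs the asset.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        default:
            break
        }
    }
}

// Kick off reconstruction at reduced detail, suitable for on-device viewing.
try session.process(requests: [.modelFile(url: outputModel, detail: .reduced)])
```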
1:15 pm - Lunch
2:30 - How I used Siri, PaLM, LangChain, and Firebase to create an Exobrain
In our fast-paced world, there is just too much information, and it often seems impossible to keep up with everything that's going on. If you've ever felt that you couldn't possibly remember everything you saw, read, or even didn't read, come to this talk and I will show you how I built an app that allows me to do just that.
I will show you how I:
- used SwiftUI to build a beautiful app that works across Apple’s platforms
- used Cloud Firestore to store gigabytes of data, keeping it in sync across all of my devices
- used the PaLM API to summarize articles, and ask my app questions about articles
- used LangChain to connect PaLM to my personal data store
- used Siri to provide a natural language interface that allows me to query my knowledge base hands-free (see the sketch after this list)
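As a hedged sketch of that last step, this is roughly what a Siri entry point can look like with the App Intents framework; `Exobrain` here is a hypothetical stand-in for the PaLM/LangChain-backed service described above:

```swift
import AppIntents

// Hypothetical stand-in for the knowledge-base service (PaLM + LangChain + Firestore).
struct Exobrain {
    static let shared = Exobrain()
    func ask(_ question: String) async throws -> String {
        // In the real app this would query the backend; stubbed here for illustration.
        "Stubbed answer for: \(question)"
    }
}

// Exposes "Ask my Exobrain" to Siri and Shortcuts via App Intents.
struct AskExobrainIntent: AppIntent {
    static var title: LocalizedStringResource = "Ask my Exobrain"

    @Parameter(title: "Question")
    var question: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        let answer = try await Exobrain.shared.ask(question)
        return .result(dialog: "\(answer)")
    }
}
```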
3:00 - visionOS: A 20 min Quick Guide
In a targeted 20-minute presentation, I'll equip experienced SwiftUI developers with the five critical elements necessary for successfully navigating the visionOS landscape. This isn't just a beginner's guide; it's a roadmap tailored for professionals who are about to undertake their first project in visionOS. By the end of this talk, you'll have a strong foundation and the essential know-how to set you on the right course for effective and innovative visionOS development.
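To ground that roadmap, here is a hedged, minimal sketch of the scene setup most visionOS projects start from (a standard SwiftUI window plus an immersive RealityKit space); the names and placeholder content are assumptions:

```swift
import SwiftUI
import RealityKit

@main
struct SpatialDemoApp: App {
    var body: some Scene {
        // A familiar 2D window, just like on iOS and macOS.
        WindowGroup {
            ContentView()
        }

        // An immersive space that can place RealityKit content around the user.
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.2),
                                         materials: [SimpleMaterial(color: .blue, isMetallic: false)])
                sphere.position = [0, 1.2, -1.0]   // roughly eye height, one meter ahead
                content.add(sphere)
            }
        }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task { _ = await openImmersiveSpace(id: "Immersive") }
        }
    }
}
```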
3:30 - Talking to your data with Swift
You'll learn how to connect LLMs to an external data source to power your very own chatbot.
We'll cover:
- The fundamentals of retrieval augmented generation with the LLM APIs available today — the main architecture for the chatbot.
- A conceptual understanding of text embeddings and vector databases, which (alongside LLMs) serve as the key infrastructure for the chatbot.
- An understanding of the limitations and safety of these techniques, and how to think about integrating them into your own iOS apps (see the sketch after this list).
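For a hedged, minimal sketch of the retrieval half, the snippet below embeds a handful of documents with Apple's built-in sentence embedding (standing in here for a production embedding API), ranks them against a query with cosine similarity, and treats the top hit as the context you would pass to the LLM; the documents and query are illustrative:

```swift
import Foundation
import NaturalLanguage

// Toy "document store" (illustrative strings standing in for your real data).
let documents = [
    "try! Swift NYC runs a one-day conference followed by two days of workshops.",
    "Retrieval augmented generation grounds an LLM's answers in your own data.",
    "Vision Pro apps use RealityKit and SwiftUI for spatial experiences."
]

// Cosine similarity between two embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magnitudeA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let magnitudeB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return dot / (magnitudeA * magnitudeB)
}

guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else { fatalError("No embedding") }
let query = "How do I connect an LLM to my own data?"
guard let queryVector = embedding.vector(for: query) else { fatalError("Could not embed query") }

// Rank documents by similarity; the best matches become context for the LLM prompt.
let ranked = documents
    .compactMap { doc in embedding.vector(for: doc).map { (doc, cosineSimilarity(queryVector, $0)) } }
    .sorted { $0.1 > $1.1 }
print("Best match:", ranked.first?.0 ?? "none")
```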
4:00 pm - Hour
5:00 pm - Closing / Announcements
September 6
9:00 am - Workshop 1 Start
12:00 pm - Workshop 1 End
2:00 pm - Workshop 2 Start
5:00 pm - Workshop 2 End
6:30 pm - Peloton Rooftop Happy Hour
September 7
9:00 am - Workshop 3 Start
12:00 pm - Workshop 3 End
2:00 pm - Workshop 4 Start
5:00 pm - Workshop 4 End
Workshops
Stefan Blos
Developer Advocate at Stream, mostly doing iOS. Previously worked as a web, cloud, and mobile developer. M.Sc. in Computer Science with a focus on ML and AI. Master at starting side projects. Likes all kinds of technologies and sports. Legally certified to make Dad jokes.
My passion is building products that users love and that enrich their lives. The tech behind that shouldn't matter too much, but with the possibilities we have, we can change people's lives, and that is what we should really focus on.
Working with language (models) on iOS
Stefan Blos
ChatGPT, GPT-4, and LLMs are the talk of the town. But aside from racist chatbots, obviously incorrect quotes, and psychedelic images, how can we really leverage these language models to improve our apps? Let's discover APIs that offer services we can really use in day-to-day development and go down the rabbit hole up to the point of using custom CoreML models and even training them. We'll start with what Apple has given us in the Natural Language framework and slowly build our way up to large models, such as ChatGPT and Whisper, that we can use for incredibly powerful applications.
There are many different layers to AI and machine learning on mobile. We can leverage the power of the built-in processors for on-device machine learning tasks, but sometimes it is necessary to run compute-heavy tasks somewhere else. We'll explore all the possibilities that we have for this!
Vatsal Manot
Vatsal fell in love with his jailbroken iPod touch 4G back in 2011 as a twelve-year-old, and has been programming on Apple platforms ever since. He's extremely passionate about the Swift community and maintains numerous Swift/SwiftUI OSS projects.
His friends would say that he loves building frameworks, almost to a fault - but he’s okay with that :p
At the moment, Vatsal is laser-focused on bridging the gap between Swift developers and generative AI. He's working obsessively on bringing state-of-the-art LLM tooling to Apple platforms via native Swift libraries, and is excited to see what the community builds with it.
Talking to your data with Swift
Vatsal Manot
In this workshop, you'll learn how to connect LLMs to an external data source to power your very own chatbot.
We'll cover:
- The fundamentals of retrieval augmented generation with the LLM APIs available today — the main architecture for the chatbot.
- A conceptual understanding of text embeddings and vector databases, which (alongside LLMs) serve as the key infrastructure for the chatbot.
- An understanding of the limitations and safety of these techniques, and how to think about integrating them into your own iOS apps.
Emad Ghorbaninia
Emad is a Lead iOS Engineer at the Ministry of Taxation in Denmark and an article author, tech editor, and video instructor at Raywenderlich. When he's not in front of a computer, he is usually playing board games 🎲 and video games 🎮 or watching series 📺. Believe it or not, sometimes he even plays the harmonica 🎼.
Code That Teaches Itself: A Superior Approach to iOS Development using GPT-Powered Coding
Emad Ghorbaninia
In this workshop, I'll show you how GPT-Powered Coding is revolutionizing iOS development. Instead of following conventional solutions from Stack Overflow, you'll learn how to use GPT to generate high-quality, unique code that perfectly fits your app's needs. I'll walk you through the benefits of GPT, how it can improve your code quality and reduce development time, and showcase real-world examples of GPT-Powered Coding. You'll also gain practical guidance and tips for integrating GPT into your workflow.
Mohammad Azam
Mohammad Azam is a highly experienced and accomplished developer with over a decade of professional experience in writing software. He has played an integral role in the success of several Fortune 500 companies including Valic, AIG, Dell, Baker Hughes, and Blinds.com, where he served as a lead mobile developer.
Azam's expertise has helped him become a top instructor on both Udemy and LinkedIn, with more than 70K students enrolled in his courses. He currently serves as a lead instructor at DigitalCrafts, a software bootcamp where he trains developers who now work at prestigious companies like Apple, JP Morgan Chase, and Exxon.
Azam is not only a developer and instructor but also an international speaker who has been sharing his knowledge and expertise since 2006. In his free time, he enjoys exercising and planning his next adventure to explore the unknown corners of the world.
CreateML for iOS Developers
Mohammad Azam
In this workshop, you'll learn machine learning concepts and how to use CreateML to train models for image, text, and data classification. Through hands-on activities, you'll learn to train models to distinguish cats and dogs, classify text and images, and analyze tabular data. By the end of the workshop, you'll have the skills to integrate machine learning models into your iOS apps and have the chance to engage in discussions with the instructor and attendees.
Craig Clayton
Craig Clayton is a self-taught senior iOS engineer, instructor, and mentor at Ed Farm, specializing in cultivating change and promoting innovation in education. He also volunteered as the organizer of the Suncoast iOS meetup group in the Tampa/St. Petersburg area for three years, preparing presentations and hands-on talks for this group and other groups in the community.
The Next Dimension: Constructing the Apple Watch Store on visionOS
Craig Clayton
In our hands-on workshop, we'll dive deep into the development for visionOS, exploring its unique capabilities and interfaces. Participants will be guided through the process of creating an Apple Watch store app tailored for this revolutionary OS. We'll focus on crafting an intuitive user interface and seamlessly integrating immersive 3D models to enhance the app's user experience. Join us to unlock the potential of spatial computing and bring your innovative app ideas to life.
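As a small hedged sketch of the 3D-model piece, the view below loads an assumed bundled USDZ asset named "WatchUltra" with visionOS's Model3D view; the asset name and layout are illustrative, not the workshop's actual project:

```swift
import SwiftUI
import RealityKit

// A product detail view that shows a bundled 3D watch model (assumed asset name).
struct WatchDetailView: View {
    var body: some View {
        VStack(spacing: 16) {
            Model3D(named: "WatchUltra") { model in
                model
                    .resizable()
                    .aspectRatio(contentMode: .fit)
            } placeholder: {
                ProgressView()   // shown while the model loads
            }
            .frame(depth: 300)

            Text("Apple Watch Ultra")
                .font(.title)
        }
        .padding()
    }
}
```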
Cristian Díaz
I am a software developer based in Berlin with 20+ years of experience in programming. My specialization is in iOS/hardware interconnectivity, and I am currently focused on creating products related to extended reality and accessibility. I have worked in various industries, including automotive hardware, medical products, and the environmental sector, which has given me a broad perspective on software development. My professional interests are centered around using technology to make a positive impact on people's lives, and I am committed to achieving this through my work.
Bringing Existing Apps to visionOS: Adopting Accessible Features for Spatial Computing
Cristian Díaz
Step into the future of app development with our exciting workshop on transforming your current apps to fit seamlessly into visionOS, Apple's incredible spatial computing platform. Join us as we explore how to integrate accessible features into the spatial computing context, and learn practical techniques for creating an inclusive experience that will improve the quality of life of your users. Don't miss out on the opportunity to leverage the power of visionOS while ensuring the accessibility of your apps.
Jonathan Blocksom
Jonathan Blocksom is a software developer based in Northern Virginia with an extensive background in 3D graphics, computer vision, and mobile programming.
3D Assets for Spatial Computing
Jonathan Blocksom
Creating and managing 3D content is a new challenge for iOS developers seeking to create apps for the Vision Pro, but there are lots of new and old tools we can use to help us. We will start by learning how to build and edit virtual assets in Reality Composer and Xcode. Then we'll see how to scan objects and rooms using RealityKit SDKs and our mobile iOS devices. Finally, we will take a look at the overall USD ecosystem and what other tools and resources there are to help us build great new spatial computing experiences!
Workshops are free for all try! Swift NYC ticket holders. Workshops will be conducted by the speakers and take place in various offices around NYC. Those who purchased a ticket will receive an Eventbrite email with further instructions on how to select a workshop in early August.
Interested in sponsoring or want more information? Send us an email at [email protected].
Meet the Hosts
Natasha Murashev
Founder of try! Swift
Natasha is an iOS developer by day and a robot by night. She organizes the try! Swift Conference around the world (including this one!). She's currently living the digital nomad life as her alter identity: @NatashaTheNomad.
Nathaniel Segal
Magician
Nathaniel Segal is a designer and magician with a degree in applied mathematics and theater from UC Berkeley. He is a member of the World Famous Magic Castle and has appeared on the show Penn & Teller: Fool Us. His work has been published in Math Horizons, and he has given multiple talks at the biennial Gathering 4 Gardner conference. Nathaniel is currently performing his newest magic production, Revelations.
Tim Oliver
Lead iOS Engineer, Open Source Contributor
Tim’s been a fanboy of iOS since the iPhone 3G and a full-time iOS developer since 2013. He currently works as an iOS engineer at Instagram, and before that, at Drivemode, both in Japan. In his free time, he enjoys contributing to the open source iOS community, attempting karaoke and playing video games.