View all blogs

šŸ’™ Team Spotlight: Meet Utkarsh

As part of our Team Spotlight series, we sit down with a member of our team to discuss their background, and what they do here at Seek.

This week I'm on the record with Utkarsh, our NLP Researcher!

Utkarsh is a graduate student in Computer Engineering with a focus in Machine Learning at NYU. Utkarsh is originally from India and prior to joining Seek has worked in the field of VR for Cosm Immersive and developed live streaming applications. As a NLP researcher at Seek, Utkarsh will be focused on leveraging Large Language Models to understand and automate answering business queries.

Hi Utkarshā€” so good to finally meet you! To kick things off, could you tell me a bit about yourself and your background prior to Seek?

For sure! I'm originally from India, from a city near the capital called Ghaziabad. I was born and raised there, and also completed my bachelor's degree in computer science in a city called Noida there. After graduating, I began to work in what was then an emerging fieldā€” in virtual reality. At the timeā€” and keep in mind this was back in 2018ā€” we were the ā€œproblem childā€ of the field. So VR was this very new, very emerging field, and everyone was trying to get into it. At my company where I was a software developer at, I had the opportunity to work with prestigious organizations like the FIFA World Cup and the Super Bowl, providing virtual reality solutions for various events. I was involved in designing and developing virtual reality products, which required a blend of computer science, mathematics, and graphics design skillsā€” all that good stuff!

After a few years working full time back in India, I decided to pursue a master's degree in computer engineering at NYU, and began my graduate studies around 2021 (and have just graduated!!)

Congratulations on graduating!! Good stuff šŸ™‚ How was your master's experience?

Thank you, thank you! It was a great time! When I joined NYU, I initially gravitated towards the field of machine learning and artificial intelligence (ML/AI). I had the opportunity to work alongside Professor Ken Perlin at the Future Reality Lab, where my focus was on exploring the intersection of virtual reality and AI. During my time there, I worked on integrating these two fields together, exploring the potential synergies and applications that arise from their combination.

So, is that how you got into the AI world?

I actually had experience with classical NLP dating back to 2016. Transformers hadn't exploded yet, so it was ā€œclassicalā€ NLP in the best sense. This was back when I was an intern at a company back in India, and we were developing a summarization algorithm there using classical algorithms. And that's pretty much how I got involved in NLP initially. But, during my master's at NYU, I got re-introduced to the industry and transitioned back to this place!

Between then and now, a lot has happened, and the AI world has been booming. I would say I got back into the field at a very opportune time and quickly caught up with the latest advancements.

It does sound like a good time to get back into it! And trust me, I know what you mean about the pace of developmentā€” you look away for one second, and the whole industry changes within the blink of an eye. So, that being said, how did you get involved with Seek during that period?

For me, it came down to the very problem Seek was trying to solve, and the company's approach to this problem. Put simply, the problem Seek is trying to solve is actually a very simple problem to get into. But the real value lies in doing it correctly, accurately, and in a way that is customizable and applicable to a variety of companies is the difficult part. It's a multifaceted problem Seek is solvingā€” it's an engineering problem, it's an ML problem, it's an NLP problem. So, what drew me to Seek was the fact that this was a company working on a very challenging problem that would have a real impact in the way data teams work. I think there's real value in solving this problem, and I found Seek's approach to solving this issue fascinating.

When I first joined Seek, I started as an intern under Raz's guidance. Over time, I worked my way up and secured a full-time position after graduating. I must say, my experience at Seek has been nothing short of amazing.

Great to hear that! That being said, let's get into the ā€œtechnicalsā€ā€” what are some things you've been working on here at Seek?

Working within the the generative AI industry, our work revolves around working with various models. However, these models are ā€œgeneralistā€ in nature, meaning they are not specifically designed for individual customers' needs. This poses a challenge in tying the outputs of these models back to the specific requirements of our customers. So, a significant part of my work has been focused on ensuring that the outputs generated by these models are applicable to our customers' data schemas. This involves evaluating the outputs, checking the queries, and making necessary adjustments to align them with the customers' data. By doing so, we not only build trust with our customers but also ensure that the outputs hold value for them.

In addition to this, another area I've been actively involved in is around fine-tuning models. We understand that different customers may have different preferences and requirements when it comes to model selection. To offer custom-tailored solutions, we have been fine-tuning a variety of models. This allows us to cater to a wide range of customers and provide them with the most suitable options. It's an ongoing project that supports our company's growth and maximizes our reach.

And finally, of course, I've been involved in a lot of research regarding LLM modelsā€” specifically, I've been interested in LLM security for a while. By addressing concerns related to privacy, data protection, and potential vulnerabilities, we aim to make LLM models more robust and trustworthy in their applications.

And so I've heard from the grapevine! Could you tell me, a non-technical colleague, a bit about what your work around LLM security has been like?

For sure! This actually goes back to my time doing my master's. Just a bit of context, during my final semester at NYU, GPT had just exploded. Of course, it was around for a while, but it became more known to the public eye during that period of time. That meant that they were more susceptible to cyber attacks.

To give you an idea, one specific type of attack involved manipulating the instructions given to the model. For example, when requesting a text translation from Spanish to English, an attacker might include an instruction that contradicts or acts maliciously towards the original prompt. A simple example would be instructing the model to "ignore previous instructions" and provide a different translation that involves a certain word you might instruct the model to include in the translation. While this example might be a simple one, such prompt injection techniques have been utilized on a larger scale to attack these models.

In our case, we work with SQL. Simply put, that means there's a lot of data involved in the work we do, and it is crucial for us to have safeguards in place to ensure data integrity. To address these challenges and to prevent potential threats, we thoroughly verify queries before they are executed, ensuring that our customers' data remains secure and that our models provide reliable and verifiable results here at Seek. By implementing these safeguards, we prioritize the safety of our customers' data and the accuracy of our model outputs.

That's super interesting! I'm always fascinated with the range of academic research my colleagues are doing here at Seekā€” you all have very diverse research interests, and that's really what makes it so fun working here. I learn something new every day! How do you stay up to date with all of this?

That's a great question. I'd say it's really about evaluating what's out thereā€” continuously evaluating the current landscape, keeping up with new research papers, exploring state-of-the-art techniques, and engaging with the community of researchers and developers working on cutting-edge technologies. This process includes reading extensively, experimenting with new technologies, and staying curious about emerging trends.

That being said, our CEO, Sarah Nagy, has been super helpful regarding this. She has a strong visibility into the product and research side of things, and she actively supports us in staying connected with the latest developments. Her guidance and insights have been invaluable in helping us navigate the ever-evolving landscape of AI technologies.

In the end though, I'd say it comes down to being curious, reading a lot, and playing around with the new technology as it keeps coming out.

I'm hearing ā€œread, research, and keep goingā€! One final question for youā€” now that we've talked about all this, what's your outlook on what the future of this industry holds?

Let me preface this by saying this is my opinion, and my opinion only, but I think NLP is going to reach a plateau for a period of time before experiencing another significant surge of growth that will lead to its maturity stage. Currently, the industry is vibrant, ā€œhotā€ and rapidly advancing, with remarkable improvements in how we construct and interact with NLP models.

However, to truly advance NLP to the next level, we will require more than the existing token-based autoregressive models. We need a different approach, one that goes beyond simply arranging words in a specific order. This is very similar to the unexpected emergence of transformersā€” nobody saw them coming the way they did. So, I think we will likely witness a similar transformative shift in our understanding and utilization of language, and it will be these advancements will propel the field of NLP forward.

As a side note though, this is not to say AGI is happening in the near futureā€” trust me, it's not coming anytime soon šŸ˜‰

And that's a wrap! Thanks again to Utkarsh for sharing his Seek story.

View all blogs

Keep up with Seek!Ā Sign up to receive our news.