Decision Stories, Misc

Find your perfect house

Reading Time: < 1 minute

I have made several previous attempts at building an app that helps people find their perfect house. In fact, my first data science project was finding a home that fit people’s needs. My first attempt was finding an apartment in New York City, like so many others were doing, and my top worry was crime. After I moved out of NYC and into the suburbs of other states, school district became a more important factor. No matter how I approached the problem, nothing beat seeing the pictures and actually visiting the house. So here is my latest attempt: just a few simple filters to knock the choices down to a few (a sketch of the idea is below). If you happen to use the app, drop a comment on how it worked for you.
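
Under the hood, the idea is nothing fancier than filtering a listings table down to a shortlist. Here is a minimal pandas sketch of that filtering step; the column names and rows are made up for illustration, not the app’s actual schema:

```python
import pandas as pd

# Hypothetical listings table; in the app, rows would come from a listings feed.
listings = pd.DataFrame({
    "price": [450_000, 620_000, 380_000, 510_000],
    "beds": [3, 4, 2, 3],
    "school_rating": [8, 9, 5, 7],   # higher is better
    "crime_index": [2, 1, 6, 3],     # lower is safer
})

# A few simple filters to knock the choices down to a handful.
shortlist = listings[
    (listings["price"] <= 520_000)
    & (listings["beds"] >= 3)
    & (listings["school_rating"] >= 7)
    & (listings["crime_index"] <= 3)
]
print(shortlist)
```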

Here is the link: http://house-panzoto.ddnsfree.com/

Misc

Cry early, or cry later

Reading Time: < 1 minute

Recently, I started sending my son to daycare. Inevitably, it takes a while for him to adjust, and he cries when leaving the house. Just like my older son, he is smart enough to know that if he leaves home at that time and with me, he is going to daycare.

And then I started to ask a question: would he cry as much if he had been sleep trained? In other words, is the total amount of crying the same for a child? Would being sleep trained harden him and make this transition easier? I don’t know the answer, and I cannot test it either, since I can’t go back in time. Even if I could, my wife would never let me sleep train him, because she can’t bear to hear the crying. We are only okay with this because he is in the car with me during drop-off, and I’m the only one who hears him cry.

On a happier note, I think it gives us a bonding moment, because I get to go through this life-changing event with him. From what I’ve heard, people bond during tough experiences: school, the military, etc. I went through this with my older son. He wasn’t very attached to me when he was little, but as an older child, he is more attached to me than to his mother. If this continues to hold, there is at least a purpose to going through this event with my younger son, because I know we will bond over it. My purpose as a parent is to make sure he can be on his own when he grows up, and I believe I’m taking the first step towards that.

Misc

Executive order regarding AI may actually be blocking competition

Reading Time: 2 minutes

The Biden administration released an executive order on October 30th, 2023 regarding the governance of artificial intelligence. I applaud the effort the White House is making towards regulating the AI industry and protecting the general public’s privacy and safety. These efforts focus on national security issues regarding biological, chemical, and nuclear weapons, specifically in connection with biotechnology and cybersecurity. It calls on the respective federal agencies to draft plans in the next 90 to 270 days to regulate companies involved in the AI industry.

There will be a lot of good things coming out of this executive order. But as human nature dictates, it’s just easier to criticize than to praise. So here is what I think needs improvement:

  1. My major concern is that this document will prevent small companies from entering the large language model (LLM) market. Early in the document, one of the guiding principles is to promote competition, especially from smaller companies. But later, it calls on the agencies to draft guidelines requiring companies to report the development of LLMs over 10 billion parameters. The intention is to keep track of models that can be used for many purposes, including those that pose national security risks. However, this requirement inadvertently impedes model development by small companies, because they don’t have the resources to generate the documentation these agencies require. The 10 billion parameter limit means the latest Llama open source models fall under the reporting requirements, which even cover companies that simply possess these models. This means the making of large models is left in the hands of big tech companies like Google, Facebook, and Microsoft, which is not what the general public would like to see.
  2. Another issue is that some of the guidelines and policies are difficult to enforce. For example, one of the principles is reducing the effect of AI on the general workforce. It is a well-intended principle, but in practice, the profit-seeking nature of companies means they will use the power of AI to reduce workforce spending. Even though many companies talk about “augmentation” of workers, the fact is that headcounts have already been reduced because of AI implementations. The reality is that people who cannot or are unwilling to learn about AI technologies will be impacted by workforce reductions in the future, and that future could come sooner than people think.
  3. It’s late. Regarding privacy laws, we are behind Europe’s GDPR requirements. Some states, such as California, have their own privacy laws, but we still don’t have unified national requirements. For comparison, China has started to mandate that online commerce companies stop using consumer data to track and target consumers with the sole purpose of maximizing profit. One example is preventing the practice of giving new customers better deals while “ignoring” old customers, or selectively giving people worse deals because the company knows they will purchase such items regardless.

Misc

A personal take on the latest AI job market

Reading Time: 4 minutes

Over a month ago, I was back on the job market. I connected with my friends, and here are the opportunities I found, along with a few tips on what the market is looking for.

  1. Products that use search. I met a lot of knowledgeable individuals in this area and learned a great deal along the way. Needs in this area revolve around traditional search algorithms, mixed with neural network approaches. I was surprised to find that keyword-based algorithms like BM25 are still among the most widely used methods. BM25 is similar to TF-IDF, but rather than just counting rare words, it also saturates term frequency and normalizes by document length, so longer texts are not favored just because they contain more words (see the sketch after this list). Even though it captures no semantic meaning, this algorithm is still among the top performers on the BEIR benchmark. I was surprised to find this, since I have used similar algorithms in topic modeling but found the results to be much inferior to sentence embeddings. The group I talked to in this area is more concerned with personalizing the search results. Many of the techniques require “augmenting” the search query, for example, using intent classification on the query to generalize its purpose. Since many search queries are short phrases with barely any context, the algorithm essentially needs to guess what people are asking for based on personal history, location, and what other users found useful. Other preprocessing includes segmentation of the search context so that larger groups are identified; the groups, rather than individual documents, become the basic units to be searched.
  2. Fine-tuning of large language models. Since GPT-4 and large language models are the current hot topic, many companies are looking to take advantage and ride the wave. It’s not always the best solution for their business case, but companies do occasionally find it useful. More often, people use out-of-the-box LLMs to extract knowledge from their documentation. For example, if we want to search through written documents and find relevant answers, we can simply feed that large document, in chunks, through the GPT-4 model and ask for an answer. Or we can use an LLM to build embeddings, run the search query against the embeddings using cosine similarity, and rank the results (a sketch of this pattern also follows the list). Until recently, GPT-4 models had no access to internet data for fact-checking, and even if they did, they might not know business-specific knowledge, so there are packages that help users search through that information as well. Techniques like Retrieval-Augmented Generation and Low-Rank Adaptation are particularly useful here. A major advantage of this approach is that it doesn’t require a lot of data to fine-tune the large language model.
  3. Causal inference modeling. This type of analysis is common in industries that need to figure out root causes; one example is the insurance industry. If I want to figure out the important factors that influence the price of auto insurance, I can simply run a general linear model with features like age, marital status, etc. to predict the insurance premium, and use models with feature importance to figure out which features influence the premium the most (a sketch of this baseline closes out the list below). But this often does not solve the business problem, since the executives will question why a particular feature is considered important. So the data scientists not only need to figure out what the factors are, they also need to find out how they act. One way people do this is by constructing equations for how they believe these features influence the premium, often based on industry experience and intuition. I cannot say I understand this part well, so it seems many years of experience are needed.
  4. Company-wide tool development. In larger corporations, there are research teams that build machine learning tools aimed at helping the whole company. Since ML talent is rare at any company, these companies look to maximize its value: instead of targeting specific use cases, the teams are asked to develop generic tools. This is actually the most common theme among the jobs I applied to. The needs range from fresh new departments to seasoned corporations with years of ML research experience. The only difference is that newer teams focus more on tool building, whereas mature teams think more about large-scale production and data normalization.
  5. Specific product development. As opposed to company-wide tool development, many companies have specific productization goals. These may or may not include an ML-specific component, but I tried for these positions anyway. Most of them were not the best fit for me: some for ethical reasons, and some simply because I’m not that good of a programmer and they don’t need ML knowledge at all. All in all, I’m glad I tried, just to see where I stand and what I don’t want to do.
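
Since BM25 came up in item 1 above, here is a minimal from-scratch sketch of the Okapi BM25 scoring function. It shows the two corrections BM25 makes on top of raw keyword counting: an inverse-document-frequency weight, and term-frequency saturation with length normalization. The toy corpus is invented; libraries like rank_bm25 provide production implementations:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against a tokenized query with Okapi BM25."""
    n_docs = len(docs_tokens)
    avg_len = sum(len(d) for d in docs_tokens) / n_docs
    # Document frequency: how many documents contain each term.
    df = Counter()
    for doc in docs_tokens:
        df.update(set(doc))
    scores = []
    for doc in docs_tokens:
        tf = Counter(doc)
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            # Smoothed inverse document frequency: rarer terms weigh more.
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            # Saturating term frequency, normalized by document length,
            # so long documents aren't favored just for having more words.
            denom = tf[term] + k1 * (1 - b + b * len(doc) / avg_len)
            score += idf * tf[term] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [d.lower().split() for d in [
    "the cat sat on the mat",
    "dogs and cats living together",
    "a very long document about cats cats cats and many other cats too",
]]
print(bm25_scores(["cats"], docs))
```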
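
Item 2’s embed-and-rank pattern is similarly compact. Here is a minimal sketch using the sentence-transformers library as a stand-in for whichever embedding model a team actually picks; the document chunks and query are made up:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Chunked business documents (in practice, split real docs into passages).
chunks = [
    "Refunds are processed within 5 business days.",
    "Premium support is available on the enterprise plan.",
    "Our API rate limit is 100 requests per minute.",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

query_vecs = model.encode(["how long do refunds take"], normalize_embeddings=True)
# With normalized vectors, cosine similarity reduces to a dot product.
sims = (chunk_vecs @ query_vecs.T).ravel()
for i in np.argsort(-sims):  # rank chunks from most to least similar
    print(f"{sims[i]:.3f}  {chunks[i]}")
```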
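
And for item 3, the feature-importance baseline that executives then push back on looks roughly like the sketch below. The data is fabricated so the example runs end to end; real premium models would be far richer. The key caveat is in the comment: coefficients say what moves the premium, not why.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for an auto-insurance book.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "married": rng.integers(0, 2, n),
    "prior_claims": rng.poisson(0.5, n),
})
# Fabricated premium so the regression has something to recover.
df["premium"] = (900 - 4 * df["age"] - 50 * df["married"]
                 + 120 * df["prior_claims"] + rng.normal(0, 40, n))

X = sm.add_constant(df[["age", "married", "prior_claims"]])
result = sm.OLS(df["premium"], X).fit()
# Coefficients answer "what correlates with the premium", not "how it
# causes it"; the causal story still has to come from domain knowledge.
print(result.params)
```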

In the end, my decision was to go with the position that is most comfortable for family reasons. That same role also has enough ML needs that I can use my past experience and explore new things at the same time. To sum up what I learned from this experience: get as many options as possible, and really figure out what you want to do. Don’t shy away from tough interview questions, because sometimes they help you figure out your life purpose as well. And for offer negotiations, having multiple offers makes it a lot easier to play the “don’t tell them your salary expectation” game. It helped me immensely, since I don’t have the natural game to land the maximum offer on my own.

Misc

Decision Stories – Making Executive Decisions

Reading Time: 3 minutes

Expanding on using the Decision Story tool to help people make decisions, I want to explore the decision-making process of business executives, and how this tool could help that target audience. By talking to people who are in executive roles and have interacted with CEOs, I found one popular way CEOs make decisions. For a decision that may change the direction of the company, a CEO likes to ask for the alternatives, and how each alternative changes the financial bottom line as well as the departments of the company. They want to know what would change, in as many aspects of the company as possible, if they made the decision, so they can evaluate whether those changes are good for the company overall.

So I thought: what if the decision-making tool could incorporate those aspects and ask a large language model (LLM) to evaluate them through the lens of an Econ? In academic economics, an “Econ” refers to someone who always makes the most logical decision, free of emotion and bias. I tried this approach and got some reasonable answers from the LLM, and the answers covered some aspects of the decision that I hadn’t considered. There are definitely CEOs who are smarter than me, but it probably doesn’t hurt for people to confirm that they have already covered the basics. Here are some aspects worth mentioning when asking the LLM for alternatives.

Here is an example in which I asked the tool to help me decide whether to use an in-house model, a third-party model, or a combination approach to improve an existing product.
Here is what I put in:

“I’m a CEO of a mid-size company with 300+ employees. We are working on speech analytics, helping customers understand call center conversations. Large language models are getting big, and I have some decisions to make about which model to use. We developed some simple binary classification models in house. We can use OpenAI’s 3rd-party models. Or we can use our existing rule-based solution. Or we can use a combination of some of the above. Please provide the pros and cons of each, and provide other alternatives for long-term growth, specifically aimed at an IPO in the next few years.”

And here is what the model returned:

1. Objective Definition:

Define the goal you’re trying to achieve. In your case, it might be “select the best AI approach to enhance our speech analytics service.”

2. Information Gathering:

Gather as much information as possible relevant to your decision. This includes:

  • Business Requirements: Specific needs of your business and customers.
  • Technical Requirements: Scalability, data volumes, security, etc.
  • Market Trends: What your competitors are doing, trends in AI technology.
  • Legal and Regulatory Requirements: Compliance with data privacy laws, etc.
  • Financials: Budgets, potential ROI, etc.

3. Decision Criteria:

Establish the criteria you will use to evaluate your options. This could include factors like cost, time to implement, ease of use, scalability, potential return on investment, etc.

4. Generate Alternatives:

Generate a list of potential options. In your case, these are the AI technologies you’re considering: In-house model, third-party model, rule-based solution, or a hybrid.

5. Evaluate Alternatives:

Use your decision criteria to evaluate each alternative. This may involve financial modeling, technical assessments, consulting with experts, and other forms of analysis.

6. Select the Best Alternative:

Based on your evaluations, select the option that best meets your criteria and aligns with your business objectives.

7. Action Plan:

Develop an action plan for implementing your decision. This should include key milestones, resources required, risks and mitigation plans, and a timeline.

8. Review Decision:

After a certain period of implementation, review the decision to see if it’s delivering the expected results. If not, understand why and adjust as needed.
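
The tool wraps this exchange behind a UI, but the same loop can be sketched directly against an LLM API. Here is a minimal sketch assuming the OpenAI Python client; the model name and the “Econ” system prompt wording are my own illustration, not the tool’s actual internals:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Frame the model as an "Econ": purely logical, no emotion or bias.
econ_system_prompt = (
    "You are an 'Econ': a perfectly logical decision analyst with no emotion "
    "or bias. For the decision below, list the alternatives, the pros and "
    "cons of each, their impact on the financial bottom line, and their "
    "impact on each department of the company."
)

decision_prompt = (
    "I'm the CEO of a mid-size speech analytics company with 300+ employees. "
    "Should we use our in-house binary classifiers, a third-party LLM, our "
    "existing rule-based solution, or a combination? We are aiming for an "
    "IPO in the next few years."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": econ_system_prompt},
        {"role": "user", "content": decision_prompt},
    ],
)
print(response.choices[0].message.content)
```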

Decision Stories

Decision Stories – Helping Managers to Make Quick Yet Consistent Decisions

Reading Time: 2 minutes

Since my last post about Decision Stories, posted here, I received some feedback about the usefulness of Decision Stories as a product. The main critique was about the ease of use and adoption of the tool. First, it’s not easy to use: people need to invest too much effort in thinking about how they made decisions before, and sometimes it’s hard for a person to recall how a decision was made. They might not be aware of the process even when they try to recall it. So there need to be processes that guide the user in thinking about how they make those decisions. I will expand on these ideas in a later post.

What I want to talk about in this post is the second point, and possible pivots: not a lot of people are making life-changing decisions in their personal lives. If not many people use the product on a daily basis, it’s even harder for them to justify the initial investment of effort to record their decisions.

One of the suggestions was to use this tool to help mid-level managers make everyday decisions, so they can spend more time on other tasks such as personal and employee development. There are potential benefits to this approach. Most decisions made by mid-level managers are technical decisions that have data to back them up. These decisions are more quantifiable and involve less “gut feeling”, so it’s easier for a model to pick up quantifiable features and make reliable predictions (a toy sketch follows below).
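
As a toy illustration of that claim, a model trained on a manager’s past calls could look like the sketch below. The features and labels are entirely hypothetical; the point is only that quantifiable inputs make the prediction problem tractable:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical quantifiable features for a call-center scenario:
# [avg_handle_time_sec, queue_length, agents_available]
X = [
    [320, 12, 4],
    [180,  3, 6],
    [450, 20, 3],
    [200,  5, 5],
    [390, 14, 4],
    [170,  2, 7],
]
# The manager's recorded decisions in those situations.
y = ["add_staff", "no_action", "add_staff", "no_action", "add_staff", "no_action"]

clf = RandomForestClassifier(random_state=0).fit(X, y)
# Predicted decision for a new, unseen situation.
print(clf.predict([[400, 15, 4]]))
```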

The types of decisions made by mid-level managers are also more homogeneous. For example, a customer service call center manager generally worries about the average handle time of calls, not about the marketing strategies of product lines, whereas an executive needs to think about how a change in a product line will ripple into the call center branch of the customer service organization. And since a consistent decision type is more likely to generate accurate decision predictions, this is more likely to work for mid-level managers than at the executive level.

A bonus of mid-level managers having consistent decisions is clarity in management expectations. If a manager is comfortable sharing the predictions, they could be used to manage expectations within the department. For example, if employees know exactly how their manager is going to react to a specific scenario, they can adjust their expectations accordingly and synchronize efforts to maximize results. This of course requires trust between the manager and their direct reports prior to deployment. Ironically, well-trusted teams probably already have clear expectations between team members; it’s the new team members who need the help the most.

And of course, any model prediction involving human decisions requires sanity checks. Blindly trusting a model’s decision without thinking about the consequences invites disaster. Before people start using this more or less automatically, both the manager and their direct reports need to understand the risk. And if they don’t feel comfortable using the tool, they should not be compelled to use it.

I’m interested in what other pros and cons people can think of for this application. I would love to hear any suggestions!

Here is a pitch deck for the product.

Decision Stories

Decision Stories – The Introduction

Reading Time: 2 minutes

About a month ago, I introduced an idea I had about retaining the intelligence of a human. Rather than building an AI to mimic humans, I want to capture how a human would make decisions, because I think the decisions we make say a lot about who we are as a person.

So I went after the idea and developed a prototype. I wrote some decision stories: scenarios in life where one needs to make a decision that has the potential to shape who we are as a person. Each decision story has four choices to choose from. They are not black-or-white decisions; each could be a valid choice, depending on how the person chooses to live their life. Once the person has made enough choices across various decision stories, the algorithm can help them make a choice in a never-before-seen scenario. This could help us make decisions when we are either unsure or don’t want emotion to influence us. Or, if we recorded the decisions of our role models, we could ask what decisions they would make in the same situation; you can actually ask, “What would X do?”. This is still a work in progress, so I’m just piecing the many parts together as I go.

The backend uses a model to summarize how the person made decisions before, generalizes that into a profile of the person, and then uses the profile to guide future decisions.
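
The post doesn’t show the backend itself, so here is only a minimal sketch of that summarize-then-guide loop, assuming an OpenAI-style chat API and an invented decision history:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

past_decisions = [
    "Offered a promotion requiring relocation: declined, to stay near family.",
    "Chose the cheaper, slower contractor over the fast, expensive one.",
]

# Step 1: generalize past choices into a profile of the person.
profile = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": ("Summarize this person's decision-making style in a few "
                    "sentences:\n" + "\n".join(past_decisions)),
    }],
).choices[0].message.content

# Step 2: use the profile to guide a decision on a new scenario.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Decide as this person would. Profile: " + profile},
        {"role": "user",
         "content": "Should I take a higher-paying job that requires heavy travel?"},
    ],
).choices[0].message.content
print(answer)
```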

The steps to do this are as follows:

  1. First, go to this page to log in: https://www.panzoto.com/login/. This is used to track the decisions you make on premade stories.
  2. Then go to this page and make as many decisions as possible: https://www.panzoto.com/add-decision/. I recommend at least 10 decisions, but it will work with just one, if making decisions is inherently difficult for you.
  3. Then go to this page to type in the decision you would like to make: https://www.panzoto.com/make-a-decision/. You can either provide several choices or simply ask a question; the model will return the choice number, or make up some reasonable choices based on the decisions you made before.
  4. If you would like to submit more stories to help expand the decision story repository, please submit them on this page: https://www.panzoto.com/submit-your-decision-story/. Keep in mind to use gender-neutral references when building your story, refrain from including personal details, and make the stories as generic as you can. The stories will be approved by a human.

DISCLAIMER: By using this service, you assume the potential risk of asking a computer model to make suggestions to you. It is meant to be helpful, not to inflict harm. It should be used in cases where you have seriously considered the pros and cons of at least two options but were still unable to decide because of emotion. Please don’t ask for ethically questionable decisions; the model has filters for harmful language and self-harm. In general, please don’t ask questions that will get this service shut down.

Decision Stories

Decision Stories

Reading Time: 2 minutes

I have always been fascinated with artificial general intelligence. In particular, I’m constantly thinking about how to create a computer equivalent of how humans think and act. It has been difficult to build such systems because current deep learning techniques focus on perception tasks like vision and NLP; not much work focuses on how humans interpret information and make decisions. So I thought about what could be useful to the masses, yet still reflect how the human brain works.

The project I came up with presents a short story to the user and asks how they would decide based on the story. When the user needs to make a similar decision in the future, there is a repository of similar stories and the decisions the user made previously, and the repository can be used to mimic the closest decision the user would have made (a sketch of this retrieval step is below). Although this doesn’t exactly replicate the user’s intelligence, if we can make similar decisions to the user’s, then the end result is similar in terms of impact on their life.
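
One simple way to implement that “closest decision” matching is nearest-neighbor search over story embeddings. Below is a minimal sketch using the sentence-transformers library with an invented history; it illustrates the idea rather than the prototype’s actual code:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Past (story, decision) pairs recorded by the user.
history = [
    ("A friend asks to borrow a large sum of money.", "Decline politely."),
    ("Your team misses a deadline on a big project.", "Take responsibility publicly."),
]
story_vecs = model.encode([story for story, _ in history],
                          normalize_embeddings=True)

new_story = "A relative asks you to co-sign a loan."
new_vec = model.encode([new_story], normalize_embeddings=True)

# Nearest past story by cosine similarity; reuse the decision made then.
best = int(np.argmax(story_vecs @ new_vec.T))
print(history[best][1])
```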

Our adult lives are filled with decisions we have to make. As much as we’d like to, we don’t normally change how we make decisions unless there is a life-altering event, such as a near-death experience or the loss of a family member. If we can capture how people make decisions, and replicate those decisions, then it’s as if we replicated the person’s behavior and values.

It may also be useful to the individual: exposing the way they make decisions, and asking whether they would like to change how they make them. Pointing out the consequences of their decision-making can sometimes trigger the changes they need, without the life-altering experiences.

(To be continued …)

Misc

ChatGPT: an interactive Wikipedia that really knows how to code

Reading Time: 2 minutes

It seems there is an endless supply of large language models being applied to interesting situations. This time it’s a lot closer to everyday life than the previous models. OpenAI released a free beta of ChatGPT, a chatbot backed by GPT-3.5 that can answer scientific questions about the world, generate recipes, and write better-than-average code. Here is the blog post that explains how it works: https://openai.com/blog/chatgpt. And if you have an OpenAI account, here is the beta: https://chat.openai.com/chat

I was fortunate enough to test the product this week, and it’s surprisingly user-friendly. There is just something about the chatbot framework that really intrigues me; maybe it’s my strong urge to engage in interesting conversations. It’s able to answer complicated scientific questions, from quantum mechanics to biological pathways to mathematical concepts. You do have to ask it to give examples and to explain further to get more detailed information. And if you are an expert in the field, you may find the information too shallow. For example, I previously published scientific papers on the biological pathways that control pupil dilation in rodents, and I was not able to get information at the level of detail of those papers. This might not be a bad thing, since a flawed model called Galactica was introduced a few weeks ago: it was trained on scientific papers to generate text that reads like a scientific paper, and the questionable outcome was authoritative text with obviously wrong information. Being humble works better in this case.

I also tested math concepts such as Taylor series and the Fourier transform, and it was able to give good explanations and examples. Another strong suit of the model is the ability to generate above-average programming code. This is not surprising, since previous GPT models have been used in the Copilot product to generate code and documentation, yet it’s still nice to see the model include documentation and explanations with the generated code. On the side of more daily tasks, it can also generate cooking recipes that seem reasonable, although I have not tested the actual recipes yet.

Regarding limitations, I found some questions the model refuses, or is unable, to answer. For example, when I ask it for ethically questionable instructions, it refuses to give them. Or if I ask the model about things that change or are hard to determine, it declines to answer; for example, if I ask what the bright star next to the moon is called, it tells me that it depends on the time and location.

All in all, it feels like an interesting tool to test and research further. People have mentioned that this model could be a reasonable educational tool if developed properly, and since there are so few AI products applied to education, I really wish more educational products would be developed using this model.
