In recent years, since the emergence of LLMs in the NLP space, tools such as ChatGPT, Bard, and DALL-E have taken over the market and our daily lives. We will focus on the human-like output capability of LLMs. There are significant reservations about this technology, stemming from concerns such as security, accuracy, and relevance. In this paper, I present a method I have designed to fine-tune a human being: lowering expectations of LLM outputs in order to increase the acceptance rate of the final product. This technique is a psychological method rather than a technological one for making a model's output appear more human-like.