Tag: chatgpt

  • So long, Sora: OpenAI pulls the plug on its AI video generation platform amid a $1 billion pullout by Disney


    Yesterday, OpenAI officially pulled the plug on Sora, its AI video generation platform that launched to enormous fanfare just six months ago. The standalone app, the API, and all video generation features within ChatGPT are being shut down. At the same time, the billion-dollar licensing partnership with Disney has been dissolved. It is a dramatic reversal for a product that once topped the App Store charts and seemed poised to reshape digital content creation.


    Meanwhile, on the other side of the world, ByteDance’s Seedance 2.0 continues to push the boundaries of what AI video can do. The contrast between these two trajectories tells us a great deal about the current state of AI, the pressures shaping the industry, and what businesses should be thinking about as they plan their technology strategies.


    OpenAI debuted Sora’s second-generation model in September 2025 alongside a dedicated consumer app that combined AI video creation with a social media feed for sharing content. The results were impressive. Downloads surpassed one million within ten days, outpacing even ChatGPT’s early adoption curve. The app quickly became the top free download in the App Store’s Photo & Video category.


    But that momentum did not last. By January 2026, downloads had dropped by roughly 45%. Users experimented with the novelty, generated a wave of viral clips featuring copyrighted characters and public figures, and then largely moved on. The app generated only about $2.1 million in in-app purchases over its lifetime, a negligible figure for a company valued at $730 billion. More critically, Sora was consuming enormous amounts of computing power at a time when OpenAI is under pressure to consolidate resources ahead of an expected IPO and intensifying competition from rivals like Anthropic and Google.


    An OpenAI spokesperson explained the decision by saying the company is narrowing its focus and redirecting compute toward robotics research and its core text and reasoning products. CEO Sam Altman reportedly told employees that ending Sora would free up resources for the company’s next-generation AI models. The message here is clear: when the runway is long but the burn rate is high, experiments that are not gaining traction get cut.


    While Sora exits the stage, ByteDance’s Seedance 2.0 remains very much alive. Released in February 2026, the model quickly drew global attention for producing cinematic-quality video with synchronized audio from simple text and image prompts. Clips featuring hyperrealistic depictions of celebrities and well-known characters went viral almost immediately, prompting cease-and-desist letters from Disney, Paramount, Netflix, and Warner Bros., along with sharp criticism from SAG-AFTRA.


    ByteDance responded by pledging to strengthen its intellectual property safeguards and suspending a controversial feature that could clone a person’s voice from a single photograph. The company also paused the planned global launch of Seedance 2.0 through its CapCut platform while it works through copyright compliance issues. Despite these setbacks, the underlying model continues to operate within China’s domestic ecosystem.


    For users outside of China, accessing Seedance 2.0 is not straightforward. The full-featured version of the model is currently available only through ByteDance’s Chinese apps, including Jimeng and Doubao, which require a mainland Chinese phone number for registration. International users looking to try the model have been turning to VPN workarounds, typically setting their location to Hong Kong or mainland China and navigating Chinese-language interfaces. Some third-party platforms and API aggregators have also offered access, though availability has been inconsistent as ByteDance tightens controls. The international version of ByteDance’s creative platform, Dreamina, offers a limited version but has not yet rolled out full Seedance 2.0 capabilities to the general public.


    One factor that may help explain why Seedance continues to thrive while Sora folds is the dramatically different public sentiment toward AI in China compared to the West. Multiple large-scale surveys conducted in 2024 and 2025 paint a consistent picture: Chinese citizens are far more accepting of and optimistic about artificial intelligence than their counterparts in North America and Europe.


    Stanford’s 2025 AI Index Report found that 83% of people in China believe AI products and services offer more benefits than drawbacks. Compare that to just 39% in the United States and 40% in Canada. An Edelman survey from late 2025 reported that 87% of Chinese respondents said they trust AI, versus 32% in the U.S. and 36% in the U.K. A joint study by the University of Melbourne and KPMG, which surveyed over 48,000 people across 47 countries, found that 93% of employees in China are using AI for their work, far outpacing the global average of 58%. The same study noted that 54% of Chinese respondents actively embrace greater use of AI, compared to just 17% of Americans.


    This cultural receptivity creates a very different operating environment for AI companies. In the United States, Sora was met with sustained backlash over deepfakes, copyright infringement, and the potential displacement of creative workers. Hollywood unions, family estates of public figures, and advocacy groups all pushed back forcefully. In China, while there are certainly regulatory constraints and some public concerns around privacy and consent, the broader population views AI development as a national priority and a source of opportunity rather than a threat. That kind of public goodwill gives companies like ByteDance more room to iterate, experiment, and build a user base for products like Seedance without facing the same intensity of cultural resistance.


    At Valley Techlogic, we want to make sure these developments are on your radar. Here is what we think matters most:

    • AI video tools are not going away. Sora’s shutdown does not signal the end of AI-generated video. It signals that the market is maturing and consolidating. The technology is real, and competitors from China and elsewhere are advancing rapidly.
    • Copyright and compliance risks remain front and center. Both Sora and Seedance ran into serious intellectual property disputes. Any business exploring AI-generated content needs clear policies, legal review, and an understanding of where generated material comes from.
    • VPN-dependent tools carry their own risks. If members of your team are experimenting with Seedance or similar tools through VPN workarounds, be aware of the security, compliance, and data privacy implications. Routing traffic through unfamiliar networks and registering on foreign platforms introduces risk that should be managed deliberately.
    • Compute costs drive real business decisions. OpenAI shut down a product used by millions because the computing costs could not be justified. This is a reminder that AI infrastructure is expensive, and the tools you rely on today may not be available tomorrow if the economics do not work out (or they may become dramatically more expensive).
    • Stay informed, stay cautious. The AI landscape is shifting fast. We recommend evaluating any AI tools your organization adopts with an eye toward longevity, data handling practices, and vendor stability.

    The divergent paths of Sora and Seedance illustrate how quickly the AI industry is evolving. A product can go from record-breaking downloads to discontinuation in under a year. Meanwhile, cultural attitudes toward AI vary so dramatically across borders that a tool deemed too controversial in one market can find a welcoming audience in another.


    For businesses, the lesson is not to chase every new AI tool that generates headlines. It is to build a thoughtful technology strategy with trusted partners who can help you navigate the noise, manage risk, and adopt the tools that will genuinely move your operations forward.


    If you have questions about how any of these developments affect your organization, or if you want to talk through your AI adoption roadmap, we are here to help. Schedule a consultation today.





  • Anthropic’s AI product Claude experienced a surge in new subscribers after they told the government “no” to removing safeguards, a new look at AI ethics


    Artificial intelligence companies are quickly discovering that ethics is not just a philosophical debate. It is becoming a market decision.


    Recently, Anthropic, the company behind the AI assistant Claude, reportedly saw a surge in new subscribers after refusing to weaken certain safety safeguards in response to government pressure. The situation has sparked a broader conversation about how AI companies balance regulatory demands, safety systems, and public trust.


    For businesses and everyday users who rely on AI tools, the moment highlights a bigger question. Who decides how powerful technology should behave?


    Anthropic publicly indicated that it would not remove or weaken several built-in safeguards designed to prevent harmful or unsafe outputs from its Claude AI system. These safeguards are part of the company’s long-standing focus on what it calls “constitutional AI,” a framework designed to make the model behave according to defined ethical guidelines.


    After the company made its position clear, reports surfaced that Claude experienced a noticeable spike in new users and paid subscribers. Many users interpreted the decision as a sign that Anthropic was willing to prioritize safety and transparency rather than bending to outside pressure.


    The government’s request reportedly included opening the product up to mass surveillance and autonomous weapons applications, and Anthropic released its refusal as a direct response to that request from the Department of War. For a growing number of users who want AI tools with clear ethical boundaries, the stand struck a chord.


    At the same time, OpenAI took a different path. The company agreed to certain government conditions and partnerships intended to shape how its AI systems are deployed and governed.


    Supporters argue this collaboration helps ensure national security oversight and responsible AI development. Critics worry that deeper cooperation between AI companies and governments could give governments undue influence over how these systems behave.


    This contrast between Anthropic and OpenAI has fueled debate within the technology community. One company chose to publicly resist modifying safety controls, while the other agreed to work within government defined frameworks. Neither approach is necessarily simple. Each reflects a different philosophy about how powerful AI technology should be managed.


    Artificial intelligence systems are quickly becoming embedded in business operations, software development, cybersecurity analysis, and everyday productivity tools. Decisions about safeguards are not theoretical. They directly influence how these systems behave in real world environments.


    When companies decide whether to weaken or strengthen safety systems, several factors come into play.

    • Public trust in the platform
    • Legal and regulatory pressure
    • National security concerns
    • Competition between AI providers
    • Ethical responsibility for how the technology is used

    The recent surge in Claude subscribers suggests that a portion of the market is paying close attention to how AI companies handle these decisions. Users are no longer just comparing features; they are comparing values, and asking whether the products they support with their hard-earned money align with those values.


    The AI industry has moved far beyond experimental research. It is now a competitive marketplace where reputation matters.


    Companies that demonstrate transparency about safety practices may gain credibility with customers who are concerned about misuse, misinformation, or privacy. At the same time, companies that cooperate closely with governments may gain regulatory stability and access to major contracts. Both strategies will likely continue to shape the next phase of the AI market.


    Anthropic’s experience shows that ethical positioning can directly affect adoption. When users believe a platform is protecting safety standards, they may be more willing to trust it with their data, workflows, and decisions.


    For organizations using AI tools, the takeaway is not about picking sides between companies. The real lesson is that governance around AI is evolving rapidly.


    Business leaders should be asking a few key questions when adopting AI platforms.

    • What safeguards are built into the system?
    • Who influences how the system behaves?
    • How transparent is the vendor about its safety policies?
    • Does the company have a clear ethical framework?

    AI is quickly becoming part of everyday business infrastructure. Just like cybersecurity or data privacy, the policies behind the technology matter.


    The recent attention surrounding Anthropic and OpenAI is a reminder that the future of AI will not only be defined by capability. It will also be defined by the choices companies make when pressure arrives.


    And as Claude’s subscriber spike suggests, users are paying attention. If evaluating AI tools for your business is a priority for 2026, you’re not alone. We have had collaborative conversations with our clients at an increasing rate as they look for AI solutions that fit their needs and align with their company mission statements, and we help them address those evaluations from a technical standpoint. Learn more today with a consultation.





  • Microsoft 365 Business Premium with Copilot Included? This new SKU makes integrating AI into your business more affordable and accessible


    In 2026, AI has cemented its place in business, helping employees achieve more with their time. However, which tool employees choose to use is still a matter of debate at most businesses (and sometimes, even when an approved tool is in place, employees will still choose to use something else).

     

    There are some risks involved with allowing employees to choose their own AI tools. AI models in general are trained not only on the data that engineers put in from the start, but also on the data they’re fed by users. This means that if your employee shares private or proprietary data with an AI tool, that data is, for all intents and purposes, now exposed to the internet at large.

     

    That’s where Microsoft 365 Copilot originally came in: it was built to solve this problem by allowing businesses to set rules within their Microsoft tenant on how and when data is shared (including not sharing any data at all with learning models). However, its significant upfront cost may initially have been off-putting to businesses only dipping their toes into the AI arena for the first time.

     

    At launch, Microsoft 365 Copilot was $360 a year per user, meaning any business that chose to use it was fully locked into the product for a full year. Now, not only is there a month-to-month billing option ($31.50 per user per month), Microsoft has also released a SKU that combines Microsoft 365 Copilot with Microsoft 365 Business Premium (which many businesses already have for the superior protections not found under the Basic and Standard SKUs). The bundle is available at the discounted price of $45.15 per user per month (compared to $54.60 to purchase the two separately). An annual commitment is still required, but the monthly billing flexibility should help businesses trying to get a handle on their technology costs.
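
    To see what that bundle math works out to, here is a quick back-of-the-envelope sketch in Python. The figures are the per-user, per-month prices quoted above; the standalone Business Premium price is an assumption derived by subtracting the Copilot price from the $54.60 combined total:

    ```python
    # License math using the per-user, per-month list prices from the article.
    BUSINESS_PREMIUM = 23.10  # assumed: 54.60 combined total minus 31.50 Copilot
    COPILOT = 31.50           # month-to-month Copilot price
    BUNDLE = 45.15            # combined SKU price

    separate = BUSINESS_PREMIUM + COPILOT         # 54.60
    monthly_savings_per_user = separate - BUNDLE  # 9.45

    for users in (5, 10, 25):
        yearly = monthly_savings_per_user * 12 * users
        print(f"{users:>2} users: ${yearly:,.2f} saved per year with the bundle")
    ```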

     

    Microsoft 365 Copilot is a superior product to other AI tools on the market (including those aimed specifically at business users) in the following ways:

     

      • Direct Integration: Embedded directly in Outlook, Word, Excel, PowerPoint, Teams, and OneDrive, no separate tools, logins, or workflows.
      • Understands Your Organization’s Data: Uses your existing Microsoft 365 tenant data (emails, files, chats, calendars, meetings) with permissions fully respected.
      • Context-Aware Email & Communication Assistance: Drafts, summarizes, and replies to emails using real conversation history, attachments, and meeting context.
      • Document Creation & Refinement: Generates, rewrites, summarizes, and formats Word documents based on your internal files and past work, not generic templates.
      • Excel Analysis (Without Writing Formulas): Analyzes data, explains trends, builds summaries, and generates formulas from plain-English instructions.
      • PowerPoint from Existing Content: Creates presentations from Word documents, notes, or OneDrive files, automatically structuring slides and speaker notes.
      • Smarter Meetings in Microsoft Teams: Summarizes meetings, highlights action items, tracks decisions, and answers questions about what was discussed—even if you joined late.
      • Real-Time Business Q&A: Ask questions like “What did we decide about Project X?” or “Summarize last quarter’s client issues” and get answers sourced from your tenant.
      • Security & Compliance Built In: Honors Microsoft 365 security controls, data boundaries, retention policies, and user permissions, no data used to train public models.
      • No Disruption to Existing IT Controls: Managed through Microsoft 365 admin tools, licensing, and policies you already use.

     

    In a nutshell, it’s not a good idea to let your employees select their own AI tools. By selecting Copilot, you’re safeguarding your company’s data while giving employees a tool that integrates directly with their day-to-day activities.

     

    If rolling out AI in your business is still a priority in 2026, Valley Techlogic has strived to stay at the forefront of new and exciting changes in AI. We are able to craft an implementation plan that works with your business while addressing concerns like data safety and employee adoption. Learn more today through a consultation.

  • Cars, coding… and healthcare? AI behemoths such as OpenAI and more look to diversify their products into applicable categories, but to what end?


    New year, new changes to the AI product approach? We’re just a week into 2026 and there have already been major changes in the AI space, including product lines diversifying into major categories to aid users more specifically in their querying approach. But first, we want to go off on a small tangent about one approach to AI that’s seeing more traction: self-driving cars.

    CES 2026, the annual, mega-popular conference, is currently underway in Las Vegas, filled to the brim with AI innovation, advancements in robotics, and updates to the consumer technology space, to name just a few of its many categories. One thing was clear across the board for the car industry specifically: self-driving vehicles are still very much on the agenda for 2026.

    Uber, in partnership with EV maker Lucid, announced that robotaxis are currently being tested and that a rollout, starting in San Francisco, is likely to begin this year (with some vehicles already being road tested there as we speak). These vehicles aim to increase passenger safety with AI features that include a roof-mounted “halo” that improves sensor visibility, spotting hazardous conditions quickly to avoid crashes. The vehicles will use self-driving technology from Uber’s partner Nuro, and the companies say they hope to deploy 20,000 or more self-driving vehicles across major cities over the next six years, according to current reporting. Time will tell how they will fare against competition from Waymo (owned by Alphabet, which also owns Google), which began as Google’s self-driving car project all the way back in 2009 and has become synonymous with the concept.

    Next, Google aims to move past just “vibe coding” with a product aimed specifically at full-fledged software developers. Google’s coding product, labeled “Antigravity,” sneakily launched just before Thanksgiving, and some senior software engineers are already providing feedback on how it competes with existing products aimed at coders in the marketplace (such as Cursor, which has tie-ins to OpenAI, Nvidia, Adobe, and more). Antigravity separates itself from Google’s flagship AI product Gemini by being aimed solely at coding applications, and it even allows users to differentiate between frontend, backend, and full stack development when prompting.

    Users say it still struggles when given incomplete or narrow prompts, but when given a senior-level prompt, the results have risen to the level of being production-ready. Users also mention there are fewer instances of it “going off script” than they’ve found with Gemini and other AI tools less singularly focused on coding. As with most AI tools in 2026, time will tell how it increases efficiency and productivity for the userbase.

    Finally, OpenAI just announced ChatGPT Health, brushing past earlier guidance that users should NOT use AI for diagnosis (which, to be fair, is still their stance in a roundabout way). ChatGPT Health will provide supportive, non-diagnostic healthcare advice and is not intended to be a replacement for healthcare services or visiting your doctor. Rather, OpenAI says it wants to improve patient understanding of medical verbiage and position ChatGPT as a patient “ally.” By the company’s own estimates, up to 40 million queries a day are health-related, which does signal market interest in a product like this, but whether it can be used safely and effectively (and still encourage users to seek out actual medical care when warranted) remains to be seen.

    The product is already receiving some backlash, as OpenAI has said it will have the ability to connect to actual healthcare systems and even receive patient records, which are ordinarily protected by HIPAA but may lose that protection when voluntarily provided by the user to a third party like ChatGPT. There is no official launch date as of writing, but users can sign up to be part of the demo now.

    In a nutshell, we’re seeing AI products move away from a catchall basis into more specific categories, perhaps to better answer those specific queries with fewer hallucinations (which are still a major problem in 2026). Again, time will tell.

    As AI becomes more customizable and more powerful in 2026, the real advantage comes from applying it correctly. Valley Techlogic helps businesses design AI solutions around their actual workflows and goals, not generic hype. We continuously invest in emerging technologies so our clients can move forward with confidence. Learn more today with a consultation.


  • ChatGPT-5 is here and opinions are mixed: we talk new features and why some users say GPT-4 was the better version


    We reported on ChatGPT-5, code-named Project Strawberry at the time, nearly one year ago today. The reported update was supposed to boost reasoning capacity and begin the transition toward self-learning AI, versus requiring vast swaths of data scraped from the internet (a shift likely aimed at combating the obvious problems that arise when you collect data from unknowing, and many times unwilling, sources).

    With a potentially industry-changing copyright lawsuit filed just this week, the race to set AI apart as a distinct tool, separate from the data it was built on, is in full swing, and as usual OpenAI’s ChatGPT product is leading the charge.

    New features include the ability to handle text, images, voice, and video all within a single conversation, so there’s no longer a need to switch between separate chats when you would like to analyze files. It’s also being reported that the answers users are receiving are more accurate, especially for technical questions, and that the model can now answer in much greater detail.

    It should be noted, though, that some of this improved reasoning is locked behind a paywall, with free users receiving the “basic” version of the model, dubbed ChatGPT-5 mini by OpenAI. Plus users receive an improved version, with one caveat: when load is high, the company has said all users will only have access to the mini version to keep services afloat.

    It’s not all sunshine and rainbows, however. Some users aren’t thrilled with the update and have even requested the ability to return to GPT-4. Common complaints are that ChatGPT-5 is much slower than 4 was and that crashes are more frequent (whether within the client itself or ChatGPT crashing users’ browser tabs).

    There have also been complaints that the model is more patronizing now, with users receiving praise for every query; changing the personality settings or directly requesting that it leave out the compliments is mostly ignored by the model at the time of reporting.

    We aren’t sure what the outcome of a successful copyright lawsuit would mean for the future of AI, but as a technology provider we suspect it will stick around in some capacity regardless of the success or failure of ongoing litigation. While creative uses of AI such as image generation may be more at stake in that litigation, we like to focus on the key functionality for businesses: increasing productivity. Here are three ways you can utilize AI in your business today:

    1. Inbox & customer-support copilot
      What it does: summarizes long threads, drafts tailored replies, and suggests next steps so you clear the queue quicker.
      Try this prompt (paste an email thread under it):
      “Summarize this thread in 3 bullets, list the customer’s main concern, and draft a friendly 120-word reply that (a) acknowledges the issue, (b) proposes a solution, and (c) offers a next step. Keep it on-brand: helpful, concise, no jargon.”
      Pro tip: Save a few tone/style notes once and reuse them for consistent replies.
    2. SOPs, checklists, and onboarding in minutes
      What it does: turns rough notes into step-by-step procedures, checklists, and quick-start guides for new hires.
      Try this prompt (paste your messy process notes):
      “Turn this into a clear SOP with: purpose, prerequisites, step-by-step actions (numbered), decision points, common pitfalls, and a 5-question quiz to confirm understanding. Make it skimmable.”
      Pro tip: Ask for a one-page version and a printable checklist for the wall.
    3. Spreadsheet/data sidekick (Excel/Sheets)
      What it does: writes formulas, cleans lists, and gives quick insights so you stop hunting Stack Overflow.
      Try this prompt (describe your sheet):
      “I have columns: Date, Lead Source, Deal Size, Status. Give me (1) a formula to count won deals per month, (2) a chart I should make and why, and (3) three insights I can present in one sentence each.”
      Pro tip: Paste a few sample rows so it can generate formulas that fit your exact layout. (A pandas version of the same calculation appears after this list.)
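
    As referenced in item 3, the “won deals per month” question can also be answered directly in code. Here is a minimal pandas sketch assuming the Date / Lead Source / Deal Size / Status layout described above (the sample rows are hypothetical):

    ```python
    import pandas as pd

    # Hypothetical sample rows matching the layout from the prompt above.
    df = pd.DataFrame({
        "Date": pd.to_datetime(["2026-01-05", "2026-01-20", "2026-02-03", "2026-02-17"]),
        "Lead Source": ["Referral", "Web", "Web", "Referral"],
        "Deal Size": [12000, 4500, 8000, 6200],
        "Status": ["Won", "Lost", "Won", "Won"],
    })

    # Keep only won deals, then count them per calendar month.
    won = df[df["Status"] == "Won"]
    won_per_month = won.groupby(won["Date"].dt.to_period("M")).size()
    print(won_per_month)
    ```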

    Ready to turn AI into real productivity? At Valley Techlogic, we can help you plug ChatGPT-5 into the tools you already use (Microsoft 365/Teams, Outlook, SharePoint, or Google Workspace) so it drafts emails, turns rough notes into SOPs, and tames spreadsheets right where work happens. Learn more today with a consultation.



  • Are you all in on AI or approaching it more moderately? The perils of not strategizing your AI rollout


    AI (Artificial Intelligence) continues to spread through modern workplaces, with some companies leaning heavily into AI investments, up to and including replacing human workers with an AI equivalent for roles such as customer service.

    One company, Klarna, is facing pushback from investors for just such a strategy. Last year, Klarna, which is known for its “buy now, pay later” financing for consumer purchases, replaced 700 workers in favor of an AI solution for customer support. Since then, its valuation has plummeted from a high of $45.6 billion in 2021 to $6.7 billion in 2025.

    At the heart of it are customer complaints about lower customer service satisfaction, which has caused the company to pivot on its “AI First” strategy, with CEO Sebastian Siemiatkowski recently stating, “Really investing in the quality of the human support is the way of the future for us.”

    What does this mean for medium and small businesses working out their own strategy when it comes to artificial intelligence? Testing the waters and applying it in moderation at the start is key to a successful AI rollout.

    While it may seem tempting to just go all in, especially if savings are on the table in terms of labor costs, the current iterations of artificial intelligence are not ready to be deployed without human oversight and intervention in our opinion. Rather than expecting AI to take over and replace human activities, it’s best to look at how you can use AI as a tool to do more.

    Here are three ways we recommend using AI to get the most out of your workday:

    1. Automating Repetitive Tasks
      AI can handle time-consuming activities like data entry, scheduling, and basic customer queries. This frees up employees to focus on higher-value, strategic work that requires human judgment and creativity.
    2. Enhancing Decision-Making
      AI-powered analytics tools can process vast amounts of data quickly and provide actionable insights. This helps employees make faster, more informed decisions without spending hours combing through spreadsheets or reports.
    3. Personalizing Training and Support
      AI can tailor learning experiences to each employee’s role and pace, recommending relevant skills development or providing just-in-time answers through intelligent chatbots. This boosts engagement and accelerates on-the-job learning.

    If developing an AI strategy for your business is a priority for you in 2025, Valley Techlogic can help. We make it a priority to stay at the forefront of emerging technologies and help our clients access continuous improvements in the tech space to meet their goals. Reach out today for a consultation.



  • 6 AI Do’s and Don’ts, including ways you may be jeopardizing your workplace data with your AI use (and how to avoid it)


    AI, or Artificial Intelligence, is becoming more and more commonplace in our daily lives, including in our places of work. You may even be using it daily without realizing it. Most search engines, for example, have an AI response to queries baked in at the top of the page, and if that’s as far as you look, then all of your searches are currently being powered by AI.

    Other tools, like weather apps, navigation, and even the spam filter in your inbox, use AI to train on and collect data that is then given back to you as answers to your questions or solutions to what you’re looking for. Drive a Tesla? All of your driving data is collected and used to train their autonomous driving algorithms.

    Which brings us to the topic of today’s article: AI in general is powered by give and take. The models collect our data and turn that data into answers; it’s a common misconception that AI produces the answers all by itself. Machine learning is often said to operate on a rule of ten: for every kind of query, the model needs many example responses, and those responses come from the unfathomable amounts of data fed into it. Think of the breadth of knowledge an AI program like ChatGPT seems to have, and you can begin to see that it takes a lot of data to provide answers to the millions of different questions it’s asked each day.

    So that data comes from you, and me, and everyone who has ever interacted on the internet in a meaningful way. That’s not necessarily a bad thing; after all, humanity tends to accomplish its greatest achievements when we all work in unison toward a goal. But how that data is collected, and how issues like copyright should be handled, are still being worked out.

    So, with all that said you might be wondering, what’s the problem? What should I be worried about when using AI in my workplace? As a technology company, we believe in using the tools available to streamline and strengthen our productivity, but we have determined that companies should be aware of these three things when using a burgeoning technology like AI in their workplace:

    1. Data Risks: As we hinted at above, AI systems tend to siphon up as much information as they can to strengthen their machine learning algorithms. This includes potentially sensitive data. Any AI strategy should include how to protect and segment data you don’t want leaked to the outside world (see the redaction sketch after this list).
    2. Errors and Reliability: There are risks to trusting AI completely when looking for answers. AI data sets are fed by a wide range of sources, and not all of them are trustworthy. You should always vet any answers you receive, especially if the question you’re asking is an important one.
    3. Bias, Discrimination and Transparency: Most of the AI tools currently on the market are being created by private companies and the processes used are hidden from outside view, so we should keep in mind that it’s possible the answers we’re receiving have been manipulated to reflect a certain outcome. Again, always vet the answers you receive from AI.
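
    To make the first item concrete, here is a minimal sketch of one way to scrub obvious PII from text before it ever leaves your network for an external AI service. The regex patterns are illustrative assumptions only, not a substitute for a proper data loss prevention tool:

    ```python
    import re

    # Redact obvious PII (emails, US-style phone numbers) before sending
    # text to an external AI service. Patterns are illustrative only.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact("Reach Jane at jane.doe@example.com or 209-555-0143."))
    ```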

    Now that we’ve touched on the things to look out for, what are three things that you can safely use AI for in your workplace?

    1. Use a local AI model: Most people are not aware that you can actually run a local, in-house AI model. These may be more limited in scope, but they do not present the security risk of public-facing AI and can be built on your own data (see the sketch after this list).
    2. Automating repetitive tasks: Certain tasks won’t carry any risk of data exposure, such as scheduling or creating reports without PII (Personally Identifiable Information).
    3. Use it to interact with customers: One of the best current business use cases for AI is automated chatbots. Chatbots can be available 24 hours a day to field simple questions and answers, which frees up your staff for other activities.
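
    As a taste of the first item above, here is a minimal sketch of querying a locally hosted model through Ollama’s HTTP API, assuming Ollama is running on its default local port and the (illustrative) model name has already been pulled:

    ```python
    import requests

    # Query a locally hosted model via Ollama's HTTP API on the default port.
    # The model name is illustrative; any locally pulled model works the same way.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": "Draft a weekly status report template for a 5-person IT team.",
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()
    print(response.json()["response"])
    ```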

    If you’re looking for the most practical and safest way to begin using AI in your business, Valley Techlogic can help. We are experienced in creating customized technology solutions for our clients and can advise on the way to implement an AI plan that doesn’t compromise on cybersecurity best practices. Reach out today for a consultation.



  • China enters the AI race with the release of DeepSeek, prompting conversations about what happens when AI tools take data from each other (rather than just the general public)


    The race for AI domination continues to heat up as China’s AI model “DeepSeek” enters the fray, just days after newly inaugurated President Trump announced his plans to invest $500 billion in AI infrastructure during the course of his term.

    Established as a startup under the same umbrella as the quantitative hedge fund High-Flyer, which is primarily owned by AI enthusiast Liang Wenfeng (who built his fortune during the 2007-2008 financial crisis), little has been verified about how DeepSeek came to be.

    That has not stopped endless speculation since its launch was announced, including about how much of it is modeled after existing AI models such as OpenAI’s ubiquitous ChatGPT.

    Also being questioned is how the chips it was trained on were sourced. Chip restrictions were placed on China in 2019, and continued under President Biden, specifically to curtail China’s ability to access infrastructure used in the advancement of AI technology. These restrictions cover not only the chips themselves, but also the technology used to manufacture them.

    According to Liang, he sourced the 10,000 Nvidia A100 GPUs prior to the federally imposed ban.

    At present, the founders of DeepSeek indicate that their goal is to continue the research and advancement of AI infrastructure with their model, not to seek commercialization. To back these claims, the first series of their model is currently available as a free, open-source download, whether you’re a researcher or a commercial user.

    It should also be noted that DeepSeek has a more up-to-date data set than ChatGPT, whose knowledge is currently capped at 2023. That means anything that occurred in 2024 or later is not available to ChatGPT, so if you were to ask it, for example, “Who won the 2024 Presidential Election?” it may not give you a correct answer.

    There have also been claims that DeepSeek was much cheaper to train, although training cost figures for existing AI models are largely inflated: they are based on cloud computing rental prices, which vary wildly depending on a range of factors.

    AI and cloud computing are both worthy investments for businesses looking to strategically position themselves for technology growth in 2025 and beyond, and Valley Techlogic is at the forefront of utilizing these technologies.

    Whether it be initializing AI tools like Microsoft’s Copilot in your business or migrating more of your operations to the cloud to reduce overhead spending on physical hardware, we’ve got you covered. Reach out today for a consultation and learn how you can catapult your business forward with technology advancements through Valley Techlogic.



  • Code named “Strawberry”, OpenAI’s latest update aims to boost reasoning capacity in their AI model


    Initially labeled Q* (for Q Star), Project Strawberry is set to become ChatGPT-5, and OpenAI is prepared to launch the update any day now at the time of writing.

    AI competition continues to stiffen, but many would argue OpenAI has a commanding lead in the AI space with its comparatively more mature model, which many believe is more accurate than competitors such as Google’s Gemini and Microsoft’s Copilot.

    However, as with most AI tools on the market, errors and general wonkiness are part of the experience, and OpenAI and other AI tool providers hope to keep improving in that arena, providing more accurate results without the errors and sometimes comical mistakes or “problems” seen in the inaccurate word scramble below.

    [Image: an AI-generated word scramble. Hint: no actual words can be spelled with all the letters given.]

    OpenAI claims Project Strawberry will have “human-like” reasoning skills and answer questions that have stumped the algorithm so far, especially complex math and programming problems.

    This update also leads into a larger project for OpenAI, codenamed “Orion.” This future tool will be an entirely new AI model, and it is going to be trained entirely via ChatGPT-5/Project Strawberry. The hope with using AI to train AI, rather than training it on data found online, is that we will see a reduction in “AI hallucinations” (incorrect predictions) and a faster rate of improvement, without having to feed the model large amounts of online data.

    This would also help OpenAI and other AI competitors avoid the murkier topic of privacy concerns around where AI gets its data.

    If you’re considering investing time in AI solutions for your business, we have 4 considerations for you to mull over first (and the first does relate to the data security topic we just mentioned):

    1. Data Security: For most of the AI tools on the market, it’s a known fact that any data you feed into them will also feed into the models they use. We would suggest that if you’re considering implementing AI solutions in your business, you do so with this factor in mind. Even if a tool claims your data will be secure and not accessible to other users (we’re looking at you, Copilot), AI is still realistically in its infancy. We suggest using an abundance of caution when it comes to data that is proprietary to your business.
    2. Cost (especially Time Cost): AI, when used correctly, can save your business time and money, but using it correctly can have a high barrier to entry. For example, if copywriting is a core facet of your business, AI is an excellent tool for sourcing ideas that your team can then spin off into their own creations. If your business is word-of-mouth-based cabinetry sales, AI may not be very useful to you at this point in time.
    3. User Experience: Consider the human experience, especially for your customers, when implementing AI solutions. AI chatbots exist, but should they replace a real person answering questions on your business’s behalf? Or does it make more sense to go halfway (i.e., the chatbot answers common questions) with a human representative ready to take over if the questions get more complicated?
    4. AI is not magic: AI will not replace human ingenuity, as outlined above it’s not a perfect solution by any means at this point in time and that’s unlikely to change anytime soon. AI should be used to build upon existing structures in your business (like adding more capacity to your marketing capabilities for example) not with the expectation it’s going to replace those structures entirely.

    Considering implementing AI solutions in your business, or hoping to turn advancements in technology to your advantage? Valley Techlogic continues to stay at the forefront of new innovations in tech, and we utilize our expertise on behalf of our customers. Reach out today for a consultation.



  • The biggest cyber security breaches of 2023


    Now that it’s 2024, we’re reflecting on the biggest events in tech from the past year, and in today’s article we want to talk about the biggest cybersecurity breaches that occurred in 2023.

    Before we get into it, let’s talk about the hard numbers. Across the board, cyber threats are up year over year, and 2023 was no exception. Here are 8 eye-opening statistics on cyber threats as of writing:

    1. The global average cost of a data breach is $4.45 million and a ransomware attack $5.13 million as of 2023.
    2. The average lifecycle (discovery to remediation) of a data breach is 277 days.
    3. 74% of data breaches still involve a human element in 2023.
    4. 64% of Americans have not checked to see if their data has been lost in a data breach.
    5. Almost half (46%) of all cyberattacks were on US targets.
    6. More than 1 million identities were stolen in 2023.
    7. 30% of those people were victims of a data breach in 2023.
    8. 54% of office workers express feeling “cybersecurity fatigue” regarding news of data breaches.

    Unfortunately, public apathy toward cybersecurity, the ongoing and sustained nature of attacks, and the lucrative payoff of successful attacks on business entities make for a potent recipe for these attacks to keep increasing in 2024.

    We want to take a look back at the biggest breaches that occurred in 2023 and also present our solution for preventing an attack of this nature from occurring to your business.

    1. MGM – Occurring in September, the unusual way MGM was breached made headlines because it did not initially involve a computer. Instead, attackers posed as people of importance to the company via a phone call and gained access to its systems, causing reputational damage, $100 million in losses, and five class action lawsuits.
    2. ChatGPT – Not even AI is safe from targeted attacks by hackers. In March of 2023, a bug in ChatGPT’s source code exposed the personal information of 1.2% of its Plus subscribers, including home addresses, full names, and email addresses.
    3. MOVEit File Transfer System – The fallout from this breach, which occurred in June 2023, extended far beyond the file transfer software company itself, reaching California’s biggest pension funds, CalPERS and CalSTRS.
    4. Rockstar – Rockstar is another example, like MGM, that proved hackers don’t need expensive equipment to breach insecure systems, with this breach being conducted using a cellphone, a hotel room TV, and an Amazon Fire Stick.
    5. The City of Oakland – An entire city was the target of a hack in February of 2023. The sustained attack, which lasted more than a week, prompted the city to declare a state of emergency while systems remained offline. Class action lawsuits were filed in the aftermath of this attack as well.

    These are just five attacks that made major news last year, but there were thousands more that did not. When an attack occurs on a small business, it often leaves the owners with no choice but to close up shop (60% of small businesses that fall victim to a cyber attack close within 6 months).

    As IT providers, this is a frustrating topic for us because so much of it is preventable. If more preventions were put in place and it were more difficult for attackers to realize their goals, then it would have a cumulative positive effect overall. As the saying goes, an ounce of prevention is worth a pound of cure. Let us help you meet your cybersecurity goals in 2024; reach out today for a consultation.


    This article was powered by Valley Techlogic, an IT service provider in Atwater, CA. You can find more information at https://www.valleytechlogic.com/ or on Facebook at https://www.facebook.com/valleytechlogic/ . Follow us on Twitter at https://x.com/valleytechlogic.