Search the Community
Showing results for tags 'generative ai'.
-
Google Bard now requests your actual location for better answers
zikalify posted a topic in Front Page News
Google Bard now requests your actual location for better answers by Paul Hill

Google has pushed out another update for its generative AI chatbot, Bard. This time, it has added the option for users to grant access to their location so that Bard can use it to provide more relevant results.

One popular Google Search query is “What time does X close?”; you’ll typically see the searched-for business in your Search results along with a list of closing times. With the location update, you can now do the same in Bard and it will give you the closing times of the local stores you asked about. Google didn’t expand much on what else precise location enables, but Bard itself says you can get location-specific information on places like coffee shops and restaurants, get directions from your current location, find events happening near you, and get local weather information.

Having tested the weather forecast, Neowin can report that it works well, with one small grievance. In the UK, where a hodgepodge of imperial and metric measurements is used, the main unit for measuring temperature is Celsius. Despite knowing that the query was coming from the UK, Bard still decided to give its response in Fahrenheit, though a quick clarification swiftly resolves this.

To see which location Google Bard has for you, just look in the bottom-left corner: you should see a blue dot if you’ve given permission for it to use your location, followed by your town or city. You can also press update location if it’s out of date because you’ve been travelling.

OpenAI’s ChatGPT took an early lead in the generative AI race, but while it still gets updates, its knowledge remains stuck in 2021 and it cannot do as much as Bard now can, such as accessing your location or pulling relevant pictures from the web into your query results (at least on the free tier). We are still early on in the maturity of these generative AI projects, so we should see many more new features arriving over time. It’s so early that Google still refers to Bard as an experiment.
-
Microsoft has signed a deal with CoreWeave for AI computing power, say sources
zikalify posted a topic in Front Page News
Microsoft has signed a deal with CoreWeave for AI computing power, say sources by Paul Hill

Microsoft has signed a deal with CoreWeave, a provider of AI computing power, that could be worth billions of dollars over a number of years. The news was disclosed to CNBC by sources familiar with the matter.

While Microsoft and CoreWeave have not confirmed the information, the sources told CNBC that the deal was made to ensure that OpenAI’s ChatGPT has enough computing power going forward. Through a partnership with Microsoft, OpenAI currently uses Microsoft Azure infrastructure to run ChatGPT, which is resource intensive. The agreement appears to have been made earlier this year.

In recent weeks, we’ve seen the price of NVIDIA’s shares rocket as investors anticipate higher earnings for the company on the back of generative AI services like ChatGPT. CoreWeave’s cloud computing services are also powered by NVIDIA hardware.

The revelation about this deal comes just one day after CoreWeave announced that it had secured $200 million in a Series B funding extension, bringing the round’s total to $421 million. The $200 million was invested by Magnetar Capital. According to CoreWeave, the funding is helping it fill a gap in the market that legacy cloud computing providers are struggling to address.

“By combining easy access to high-powered GPUs for training AI models with fast and flexible infrastructure and by focusing on a specific type of compute, CoreWeave continues to differentiate itself from other companies in the space,” said Ernie Rogers, Magnetar’s chief operating officer. “Magnetar believes CoreWeave sits in a sweet spot for enabling world-class results across a number of industries. We are proud to have been the lead investor for CoreWeave’s Series B funding round and its extension.”

Now that knowledge of this agreement is public, it could add even more fuel to NVIDIA’s stock price, as it suggests the company could see even more demand for its products while CoreWeave seeks to provide resources for Microsoft and OpenAI.

Source: CNBC
-
Check out the classic Windows XP and Windows 11 wallpapers with generative AI fill effects
John Callaham posted a topic in Front Page News
Check out the classic Windows XP and Windows 11 wallpapers with generative AI fill effects by John Callaham

Generative AI is being used for all sorts of tasks right now, including creating artwork from just a few text prompts. It is also being used to help create wallpapers. Today, Microsoft's Michael Gillett uploaded new versions of well-known Windows wallpapers that were enhanced with generative AI.

Gillett's day job is at Microsoft, where he is a Partner Technology Strategy Manager. However, he also runs Wallpaperhub.app, which collects and stores Microsoft-themed wallpapers. Today, via a post on Twitter, he announced he has uploaded versions of the classic Windows XP "Bliss" default wallpaper and the newer, but still cool, Windows 11 "Bloom" wallpaper.

It's time to use AI for wallpapers! Here are the default #Windows11 Bloom wallpaper expanded to show what's beneath and the classic #WindowsXP Bliss with more landscape visible 😍 Download Generative Bloom: https://t.co/gy72oGzcil Download Generative XP: https://t.co/gtnw8m9vrh pic.twitter.com/58cRJQRVF0 — Michael Gillett (@MichaelGillett) May 31, 2023

The new wallpapers used generative AI fill to show what's beneath both of the original images. While they certainly look different, the new versions look like natural extensions of Microsoft's creations. In the case of the Windows XP wallpaper, generative AI fill created a lake below the green landscape. You can download "Generative Bloom" and "Generative XP" at the Wallpaperhub.app site now. Some users have already asked the AI art generator Midjourney to create "Windows 12" wallpapers that actually look pretty good.

Microsoft is currently working on using AI to create new wallpaper visual effects. The first reports on this feature hit the internet earlier this month, when some users found programming strings in a Windows 11 Canary channel build labeled "Depth effects," "Parallax Background," and "WallpaperMotion." Later in May, Twitter user Albacore posted a short video showing off these new parallax effects on a Windows 11 wallpaper. Microsoft didn't mention the feature last week at its Build 2023 developer conference, but hopefully we will learn more about these kinds of features officially in the near future.
-
Nvidia becomes the first chipmaker to hit $1 Trillion market cap, thanks to the AI boom
anmol112 posted a topic in Front Page News
Nvidia becomes the first chipmaker to hit $1 Trillion market cap, thanks to the AI boom by Mehrotra A

Nvidia has just breached a $1 trillion market capitalization, becoming the first chipmaker to join the elite club in the process, thanks to the rise of generative AI. Last week, Nvidia added close to $200 billion in value in a day after the mid-week quarterly report for Q1 2023, where Jensen Huang, CEO of Nvidia, noted that the company is reaping the benefits of being the first to invest in various AI technologies.

Today, the company's stock opened at $405 and hit a high of $419, putting Nvidia alongside Apple, Microsoft, Alphabet and Amazon in the trillion-dollar club. As of writing this article, Nvidia's market capitalization sat at $1.01 trillion, up from $900 billion last week.

Nvidia has returned close to 30 percent in the last week, which is higher than the Nasdaq's return for the same period. Over the last year, Nvidia has doubled investors' money, and its investments in artificial intelligence (AI) have made the company even more attractive to investors.

The California-based chipmaker has had a busy couple of weeks in which it doubled down on AI with announcements at Microsoft Build 2023 as well as at Computex 2023. These included ACE, a ChatGPT-like model for NPCs in games, a collaboration with WPP to introduce an innovative content engine, new AI enhancements for RTX GPUs, a new Ethernet switch focused on AI applications and a partnership with MediaTek to bring AI to cars.

While the company has been riding the AI wave, there is skepticism about its long-term growth and whether it can sustain the current momentum. Nvidia saw similar jumps in its stock price during the crypto mining boom in 2020, which resulted in the GPU shortage. However, that didn't last long, as the stock fell by more than 40% in 2022.
-
Nvidia revolutionizes content creation using generative AI with ad giant
Omer Dursun posted a topic in Front Page News
Nvidia revolutionizes content creation using generative AI with ad giant by Omer Dursun

Nvidia has joined forces with WPP to develop an innovative content engine that leverages Omniverse and AI capabilities. This collaboration aims to empower creative teams to produce high-quality commercial content more efficiently.

The brand-new content engine connects a diverse ecosystem of 3D design, manufacturing, and creative supply chain tools, including renowned platforms like Adobe and Getty Images. By integrating 3D content creation with generative AI, WPP's artists and designers can create compelling visuals and experiences, such as 3D product configurators.

During his keynote address at COMPUTEX, Nvidia CEO Jensen Huang unveiled a demo showcasing the potential of this engine. Huang highlighted how WPP's clients can collaborate with their teams to produce large volumes of brand advertising content, ranging from images and videos to immersive experiences. Mark Read, CEO of WPP, emphasized the transformative power of generative AI in marketing and praised the partnership with Nvidia.

At the core of this content engine lies Omniverse Cloud, a platform that connects 3D tools, facilitates the development of industrial digitalization applications, and supports the operation of those tools. This integration allows WPP's designers to create brand-accurate, photorealistic digital representations (digital twins) of client products. Designers can use AI models like Adobe Firefly and NVIDIA Picasso to generate high-fidelity images based on text prompts and incorporate them into scenes. The final scenes can be rendered as brand-accurate 2D images and videos for traditional advertising purposes, or published as interactive 3D product configurators on the NVIDIA Graphics Delivery Network.

In addition to significantly enhancing speed and efficiency, the new content engine outperforms existing approaches that rely on manual content creation and on disconnected data sources spread across various tools and systems.

The new content engine will soon be exclusively available to WPP's clients worldwide. It will provide them with a powerful AI-driven solution that redefines content creation, personalization, and brand experiences in digital advertising.
-
NVIDIA and MediaTek team up to bring AI to all types of cars
zikalify posted a topic in Front Page News
NVIDIA and MediaTek team up to bring AI to all types of cars by Paul Hill

NVIDIA has announced that it’s partnering with MediaTek to develop systems-on-chips (SoCs) for vehicles that will provide infotainment services. Under the plans, MediaTek will develop the SoCs using an NVIDIA GPU chiplet that runs the NVIDIA DRIVE platform.

According to NVIDIA CEO Jensen Huang, the move won’t only benefit owners of premium vehicles. “The combination of MediaTek’s industry-leading system-on-chip plus NVIDIA’s GPU and AI software technologies will enable new user experiences, enhanced safety and new connected services for all vehicle segments, from luxury to entry-level,” Huang said.

The NVIDIA-powered chip will help to deliver a performance-enhanced Dimensity Auto platform for customers. Dimensity Auto is MediaTek’s in-vehicle solution that brings entertainment and cockpit features to equipped vehicles.

“NVIDIA is a world-renowned pioneer and industry leader in AI and computing. With this partnership, our collaborative vision is to provide a global one-stop shop for the automotive industry, designing the next generation of intelligent, always-connected vehicles,” said MediaTek CEO Rick Tsai. “Through this special collaboration with NVIDIA, we will together be able to offer a truly unique platform for the compute-intensive, software-defined vehicle of the future.”

Last week, NVIDIA’s stock price rocketed up by $80 to around $380 per share on the back of demand it saw for AI chips to power things like ChatGPT. Whether that stock price is actually justified is a whole other matter. The stock market is closed today so we won’t be able to see what impact today’s announcements will have on the price, but it will no doubt see some uptick.

The recent NVIDIA rally has been brought about by generative AI, and the technology is also significantly helping other companies like Microsoft and Google. With higher interest rates, the stock market has been taking a hammering as people reduce their spending, but generative AI is seemingly helping tech firms buck the trend.
-
NVIDIA shares recently hit $380 each, but is the price fair?
zikalify posted a topic in Front Page News
NVIDIA shares recently hit $380 each, but is the price fair? by Paul Hill

NVIDIA has been making the news in recent days after it released its first-quarter earnings. It saw a 14% increase in revenue year-over-year in its data centre business thanks to the explosion of generative AI, which its hardware largely powers. The news propelled NVIDIA’s stock price up by about $80 in one day, and the company’s total market cap almost surpassed $1 trillion, which, had it done so, would have put it in a very special club with Apple, Microsoft, Saudi Aramco, Alphabet, and Amazon.

In this editorial, I want to run NVIDIA’s financials through two valuation methods I like for approximating a company’s fair or intrinsic value. They are fairly conservative valuation techniques, but we can apply a quick trick to help account for all the hype surrounding NVIDIA and the subsequent demand that pushes the price of the stock that much higher. Nothing I outline below constitutes investment advice; furthermore, I do not own NVIDIA stock, nor am I shorting it. Both valuation methods, the Graham Number and Book Value Per Share, are well-known ways of valuing companies’ share prices, and both are quite conservative. Do not buy or sell NVIDIA stock solely on these valuations.

Graham Number

The first valuation method I want to look at is called the Graham Number. The name may be familiar to you if you’ve ever listened to Warren Buffett talk: it’s named after his mentor Benjamin Graham, a proponent of value investing (where you buy underpriced stocks with the expectation that they’ll go back up to their fair value). The Graham Number formula outputs the most a value investor should pay for a stock. Ideally, a value investor would want to buy the stock for 20% or more below the Graham Number to give themselves a margin of safety.

With very popular stocks like Google, Tesla, Meta, and NVIDIA, you’ll often see that sheer demand pushes their price well above the Graham Number, so a trick can be used where you multiply the Graham Number by four just to account for all the demand. Obviously, this is a bit precarious for people after value stocks, but stocks like NVIDIA may never get close to their actual Graham Number due to demand.

So, how do you work out the Graham Number? The formula is:

SQRT(22.5 x (earnings per share averaged over three years) x (book value per share))

If we use the latest data from Yahoo! Finance, we can tap √(22.5×((1.74+3.85+1.73)÷3)×(8.96)) into a calculator to get a result of $22.17. According to the Graham Number formula, the fair value of an NVIDIA share is $22.17 - compare that to the value it’s currently trading at of $379.80 per share. Even if you multiply this by four to try and account for hype, you still only reach $88.71. (A short code sketch reproducing this arithmetic appears at the end of this article.)

Book Value Per Share

Earlier, I referred to the Graham Number and Book Value Per Share as conservative valuation tools. It would be more accurate to call the Book Value Per Share ultra-conservative. The Book Value Per Share is how much a shareholder would theoretically be paid out if a business sold all of its assets, paid its debts, and distributed the rest of the money to its shareholders. You can work out the Book Value Per Share by subtracting the company's total liabilities from its total assets and then dividing the answer by the outstanding shares - it’s also readily available on websites like Yahoo! Finance.

In the case of NVIDIA, it has a Book Value Per Share of $8.96. The Book Value Per Share is far too conservative for valuing NVIDIA; demand for the stock pushes it well above this figure. Intel, which has not been doing well recently in terms of its stock price, is still hovering just slightly above its Book Value Per Share.

Bonus: Price/Earnings to Growth Ratio, P/E Ratio, DCF

While digging out the numbers for the Graham Number, I happened upon NVIDIA’s Price/Earnings to Growth (PEG) ratio. This is similar to the price-to-earnings (P/E) ratio but also factors in the future expected growth of a company. The magic number when it comes to the PEG ratio is 1.0: if it’s below this, it can indicate a stock is undervalued, and if it’s above, it can indicate that it’s overvalued. Using the 5-year expected growth rate, Yahoo! Finance says NVIDIA’s PEG ratio is 3.55, meaning it’s trading at 3.55 times its expected earnings growth rate, again suggesting it's overvalued. If NVIDIA had a PEG ratio of 1.0, it would be trading at $107.04 - this could be taken as a possible fair price.

Going back to the P/E ratio mentioned earlier, Yahoo! Finance says that NVIDIA's P/E ratio is at a huge 218.28. For comparison, the S&P 500's long-term average P/E ratio is about 16, which again suggests NVIDIA could be overvalued.

Aside from the valuation methods used above, there's also a common one you'll see online called Discounted Cash Flow (DCF). This method attempts to work out the fair value today based on future expectations. The trouble with this method, of course, is that nobody can predict the future. Alpha Spread, which offers DCF valuations, lists worst, base, and best case valuations of $44.55, $80.51, and $226.81, respectively. A similar website, GuruFocus, suggests a DCF fair value of $59.87. So, as you can see, it's a bit all over the place, but well below current levels.

Conclusion

Over the long term, NVIDIA and other tech stocks will probably grow as they develop new technologies and expand their portfolio of products and services. In the short and medium term, however, NVIDIA could experience a bit of a pullback, so it’s best to be careful if you were thinking of jumping in right now. One of the things the investor Charlie Munger says is that you should look for great companies at a fair price, as opposed to fair companies at a great price. NVIDIA is a great company, but after the huge price explosion, I don’t think it’s currently trading at a fair price.

If you are thinking about investing, you should not use these valuation methods alone; they should be used within a more comprehensive system for evaluating stocks. As I've said with other types of investing in the past, if you decide to have a go, only invest what you can afford to lose, or you might end up like this guy.

The views, opinions, positions or strategies expressed by the author and those providing comments are theirs alone, and do not necessarily reflect the views, opinions, positions or strategies of Neowin.
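For readers who want to double-check the arithmetic above, here is a minimal Python sketch of the Graham Number calculation and the PEG-implied price. The function and variable names are illustrative choices of my own, the inputs are simply the Yahoo! Finance figures quoted in this article, and small rounding differences from the in-text numbers are to be expected.

```python
import math

def graham_number(eps_by_year, book_value_per_share):
    """Benjamin Graham's ceiling price: sqrt(22.5 * average EPS * book value per share)."""
    average_eps = sum(eps_by_year) / len(eps_by_year)
    return math.sqrt(22.5 * average_eps * book_value_per_share)

# Figures quoted above (Yahoo! Finance, May 2023)
eps_last_three_years = [1.74, 3.85, 1.73]  # earnings per share for the last three years
book_value_per_share = 8.96
current_price = 379.80
peg_ratio = 3.55

graham = graham_number(eps_last_three_years, book_value_per_share)
print(f"Graham Number:          ${graham:.2f}")                     # roughly $22, the article quotes $22.17
print(f"4x 'hype' adjustment:   ${4 * graham:.2f}")                 # roughly $88.7
print(f"PEG-implied fair price: ${current_price / peg_ratio:.2f}")  # roughly $107, the article quotes $107.04
```

As noted above, the four-times multiplier is this editorial's rough hype adjustment rather than part of Graham's original method.
-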
Nvidia gains close to $200B in value, races towards $1 Trillion market cap, thanks to AI
anmol112 posted a topic in Front Page News
Nvidia gains close to $200B in value, races towards $1 Trillion market cap, thanks to AI by Mehrotra A

Nvidia has seen a massive increase in its stock price following Wednesday's first quarterly report for the 2023-24 financial year. The jump comes after the company announced its results and claimed that it can meet the demand for AI chips created by the sudden rise of generative AI technologies. At the time of writing, Nvidia's shares sit at $383, up from $308 last night, giving investors a hefty 22 percent return in a day.

The speedy rise in the price of Nvidia's shares also gives the chipmaker a chance to become the fifth publicly traded US company to join the elite $1 trillion club. With the boom in the share price, Nvidia has added $185 billion to its market capitalization, which is more than Intel's total market capitalization. Unfortunately, while Nvidia has been enjoying the moment, Intel has lost more than 5 percent of its share price during the first half of trading.

Nvidia is not the only company benefiting from the generative AI wave. Taiwanese chipmaker TSMC also saw gains following Nvidia's positive result, and Nvidia's good news has done wonders for AMD, which saw a massive 13 percent jump in its share price following Nvidia's quarterly financial report last night.

However, the jump in Nvidia's price has had a negative impact on short sellers. According to a Bloomberg report (paywall), the spike in Nvidia's stock price has blown a $2.3 billion hole in the positions of those betting against the chipmaker. It also brought Nvidia's notional short interest value (the amount of money short sellers have bet on Nvidia's decline) to $9 billion, making it the "fourth most-shorted stock in the US, behind only Apple Inc., Tesla Inc. and Microsoft Corp."

Currently, analysts believe that this is now Nvidia's game to lose. Geoff Blaber, chief executive of CCS Insight, noted the following:

We are obviously seeing a huge spike in AI demand and Nvidia is at the very front line of that. They are without doubt in pole position because they provide a very comprehensive toolchain that no other company is able to currently.

Nvidia has seen fluctuations in the past as the company went through various technological trends like the boom in crypto mining and autonomous driving. During the investor call, Jensen Huang, CEO of Nvidia, noted that the company is reaping the benefits of being the first to invest in the various AI technologies.

When generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time. With generative AI becoming the primary workload of most of the world’s data centres generating information, it is very clear now that . . . the budget of a data centre will shift very dramatically towards accelerated computing, and you’re seeing that now.

It will be interesting to see if Nvidia can sustain this sudden rise in the share price. As things stand, the company may become the first chipmaker to join Apple, Microsoft, Alphabet and Amazon in the trillion-dollar club. Nvidia's market capitalization stands at $985 billion at the time of writing this article.
-
NVIDIA revenues fell 13% in Q1 compared to the year before
zikalify posted a topic in Front Page News
NVIDIA revenues fell 13% in Q1 compared to the year before by Paul Hill

NVIDIA has told investors that its first-quarter revenues were down 13% year-over-year to $7.19 billion, but up 19% from the previous quarter. The company is aiming for revenue to reach $11 billion in the second quarter, plus or minus 2% - investors will be keenly watching to see if the firm achieves this goal.

The company is a major beneficiary of the generative AI boom. Without its underlying hardware, many of the generative AI tools that are coming online now wouldn’t be possible.

“The computer industry is going through two simultaneous transitions — accelerated computing and generative AI,” said Jensen Huang, founder and CEO of NVIDIA. “A trillion dollars of installed global data centre infrastructure will transition from general-purpose to accelerated computing as companies race to apply generative AI into every product, service and business process.

“Our entire data centre family of products — H100, Grace CPU, Grace Hopper Superchip, NVLink, Quantum 400 InfiniBand and BlueField-3 DPU — is in production. We are significantly increasing our supply to meet surging demand for them,” he said.

On the back of the generative AI explosion, NVIDIA has seen its data centre-related revenue hit a record $4.28 billion. That’s up 14% from a year ago and 18% compared to the previous quarter. On the other hand, its first quarter gaming revenue was at $2.24 billion, down 38% from a year ago but up 22% from the previous quarter. Its Professional Visualization business was down 53% to a revenue of $295 million and its automotive business was up 114% to a record $296 million.

For any investors out there holding NVIDIA shares, the company said it will pay a cash dividend of $0.04 per share on June 30, 2023, to all shareholders who are on record as of June 8, 2023. During the first quarter, it paid out $99 million in cash dividends to shareholders.

At the end of the trading day yesterday, NVIDIA stock was trading at around $305 per share. In after-hours trading when the first quarter results were announced, the stock price shot up and sits at $380 at the time of writing.
-
Google Bard can now bring in images from Search by Paul Hill

Google has updated Bard again today; this time, it has gained the ability to bring in images from Search. For example, you can now say to Bard “Show me a dog” or something more advanced like “Show me a red dog chasing a stick”, both of which yield acceptable results. While Google Bard supports English, Japanese, and Korean, image support only works with English requests right now.

If any of the results are interesting to you and you’d like to follow them up, the source is attached to the image and you can click through to it. According to the company, it has added image support because the medium can help communicate ideas more effectively. It said that “They can bring concepts to life, make recommendations more persuasive and enhance responses when you ask for visual information.”

Ever since Google I/O at the start of the month, updates have been coming to Bard on a regular basis. It has added a new PaLM 2 LLM under the hood, an export option, dark mode, and more useful summaries and source information. To be clear, this latest update just fetches relevant images from Search. Bard cannot yet generate its own images via Adobe Firefly, but that should arrive sometime in the coming months.

When you ask for images, Bard can display several images separately. After a bit of experimenting with different queries, it seems it can also create a gallery with left and right buttons for scrolling. It usually provides a small description underneath each image so you can better understand what you’re looking at.

Keeping on top of all of Bard's new capabilities can be a bit tricky. An easy way to see what it can do is by heading to the Updates tab in the left-side menu. There, you’ll find all the major updates to Bard, when they were added, and why they were added.
-
Adobe adds AI features to Photoshop with new Generative Fill feature
John Callaham posted a topic in Front Page News
Adobe adds AI features to Photoshop with new Generative Fill feature by John Callaham

Earlier this year, Adobe announced Firefly, a set of generative AI tools and features for its many Creative Cloud apps. Today, Adobe revealed that one of those Firefly tools, Generative Fill, is being added to the beta version of its popular Photoshop image editing app.

In a press release, Adobe stated that Generative Fill for Photoshop was designed to help photo editors add, remove, or extend content in images. It stated:

Generative Fill automatically matches perspective, lighting and style of images to enable users achieve astounding results while reducing tedious tasks. Generative Fill expands creative expression and productivity and enhances creative confidence of creators with the use of natural language and concepts to generate digital content in seconds.

One notable thing about Adobe's Firefly platform is that it uses artwork that comes from the company's own Adobe Stock images, along with openly licensed and public domain content. This allows features like Generative Fill to use content without having to deal with copyright restrictions. In addition, Adobe says it follows its own AI Ethics rules for labeling content made with generative AI as such. It states:

Generative Fill supports Content Credentials, serving an essential role in ensuring people know whether a piece of content was created by a human, AI-generated or AI-edited. Content Credentials are like “nutrition labels” for digital content and remain associated with content wherever it is used, published or stored, enabling proper attribution and helping consumers make informed decisions about digital content.

The Generative Fill feature is available in the Photoshop desktop beta app today. It's also available as a module within the Firefly beta app, and it is expected to be generally available sometime in the second half of 2023. It will join the many AI art creation tools already out there, including Microsoft's Bing Image Creator and Midjourney.
-
G7 leaders unite to regulate generative AI globally under the 'Hiroshima Process'
Tushar Mehta posted a topic in Front Page News
G7 leaders unite to regulate generative AI globally under the 'Hiroshima Process' by Tushar Mehta

Source: G7 Hiroshima Summit 2023

The sudden boom in the popularity of generative artificial intelligence (AI) tools like ChatGPT has compelled tech giants, including Microsoft and Google, to join the race with their own chatbots that display human-like conversational skills. But the phenomenon has left lawmakers worldwide grappling with how to regulate its fair and ethical use. Thus, leaders from the Group of Seven countries recently came together at the G7 Hiroshima Summit 2023 to discuss ways to form global standards under common democratic values.

Convening what is being dubbed the "Hiroshima Process," participating governments will initiate cabinet-level talks and report the results at the end of the year, as per Bloomberg. Meanwhile, Japanese Prime Minister Fumio Kishida insisted on a "human-centric" approach toward the development of AI and called for a global and secure exchange of data. Kishida also pledged a financial contribution to the effort to ensure AI is not misused for spreading misinformation or harming humans.

The development comes a few weeks after digital and tech ministers from G7 nations unanimously decided to adopt a "risk-based approach" without stifling innovation, as per the official statement. It also follows Italy's recent temporary ban on ChatGPT and concerns from lawmakers across several countries and regions, including the U.S., Australia, and the EU, about the potential dangers of generative AI.

Notably, the European Union, also a "non-enumerated" member of the G7, is already leading the effort to draft an "AI Act," which is set to be the world's first all-encompassing legislation on the use of AI. The proposed AI Act also relies on a risk-based approach and classifies AI applications into unacceptable, high, limited, and minimal risk categories based on their implications. Besides popular chatbots such as ChatGPT, the AI Act looks to rein in other AI applications that rely on advanced computing algorithms, such as remote biometric surveillance systems. Similarly, the US government is working on a model AI Bill of Rights to ensure the safe, private, and accountable use of AI.

Despite the speedy progress, generative AI tools have been subject to criticism, not only from governments and legislators but also from technology leaders, including OpenAI's CEO Sam Altman. Earlier this week, Altman testified in front of the U.S. Congress, where he primarily echoed the need to regulate AI and called for the formation of a government body that licenses AI companies.

Participation from the G7 countries should accelerate global, unified efforts to keep AI safe for users, not just in the participating countries; the approach could also serve as a model for other democracies around the world.
-
-
Apple tells employees not to use AI chatbots like ChatGPT over confidential data leak fears
zikalify posted a topic in Front Page News
Apple tells employees not to use AI chatbots like ChatGPT over confidential data leak fears by Paul Hill

Apple has told its employees not to use generative AI tools such as ChatGPT and GitHub Copilot, according to The Wall Street Journal, which got the information from people in the know. Apple is not the only big tech firm to take such action; Samsung has also banned its employees from using generative AI chatbots. Apple apparently told employees that using these chatbots could cause the accidental release of confidential information.

While most people are familiar with ChatGPT and what it does, you may not have used GitHub Copilot. GitHub is owned by a major Apple competitor, Microsoft. With Copilot, users can automate some of their software development, and Apple is concerned that Microsoft could intercept secret Apple code to see what it’s working on, or simply copy the products.

Luckily for Apple employees who want to delegate jobs to AI, Apple is working on its own generative AI product, according to the report. It’s not clear whether Apple employees are able to use this internally yet, but as soon as it goes live, there will be no need for them to resort to products like ChatGPT.

Apple is due to hold its WWDC developer conference early next month. The company is expected to reveal its mixed-reality headset, and it wouldn’t be too much of a surprise if we saw some sort of generative AI at least demoed. A lot of tech firms seem to have been working on generative AI for a while now and have been quick to launch their own products; Apple could be in the same position.

Speaking of WWDC, it was reported a few days ago that Apple could unveil support for sideloading apps on iOS. This feature has been on Android for a long time, if not from the beginning. It's definitely going to be interesting to see Apple open up iOS a bit more.

Source: The Wall Street Journal
-
OpenAI CEO Sam Altman tells US Senate panel generative AI needs regulation
John Callaham posted a topic in Front Page News
OpenAI CEO Sam Altman tells US Senate panel generative AI needs regulation by John Callaham

OpenAI founder and CEO Sam Altman found himself in Washington DC, addressing a US Senate panel on the rise of generative AI and its possible effects on many different industries. Altman told the panel that such AI systems, like his own company's ChatGPT, need to be regulated by the government.

Reuters reports that Altman believes the government should impose some kind of licensing and testing requirements on AI systems. He also stated that the use of AI to interfere with upcoming elections, by users who may create realistic-looking false images or video, is a "significant area of concern."

That was brought home at the start of the hearing by US Senator Richard Blumenthal. Bloomberg posted a video of Blumenthal playing a recording of his voice talking about the effects of AI. He then revealed that the recording was actually made by an AI that replicated his voice, trained on his previous US Senate speeches. Even the recording's remarks were written by an AI program. CNN reports that, after playing the recording, Blumenthal said an AI-replicated voice could just as easily have created a false "endorsement of Ukraine’s surrendering or Vladimir Putin’s leadership."

Altman also said at the panel that businesses should have the right to ban their content from being used to train AI models. And he stated that he preferred AI to use a subscription model, like OpenAI's own ChatGPT Plus, rather than an ad-based model, which is being used by Microsoft's Bing Chat.
-
Google Bard now gives better summaries and more useful source info
zikalify posted a topic in Front Page News
Google Bard now gives better summaries and more useful source info by Paul Hill

It has been five days since Google opened Bard up to most of the world and added Japanese and Korean, but Google has already pushed out another update. This time, the company is promising better summary info to help you get the gist of a topic quickly, as well as more helpful sources.

One of the issues with Bard before today was that it would output a lengthy answer and include sources at the bottom; unfortunately, you didn’t know which parts of the answer came from which sources. That changes today. In the latest update, Google is now putting numbers alongside the response to show which parts are taken from the source links at the end of the answer.

With regard to better summaries, Google said it wants to help you get the gist of an answer more quickly, but that it won’t always get it right. If Bard’s response seems weird or is just plain wrong, don’t hesitate to give the response a thumbs down so that Google can have a look.

If you’d like to check back through the latest Bard updates, just open up Bard and press the Updates option on the left-hand side. It will open a new tab with dated release notes.
-
Amazon's online store could be turbo-charged with generative AI features
zikalify posted a topic in Front Page News
Amazon's online store could be turbo-charged with generative AI features by Paul Hill

Last week at Google I/O, the search giant showed off how it is integrating generative AI directly into Search. It turns out that Amazon could be doing something similar, according to a previous job listing spotted by Bloomberg. The job ad said that Amazon Search would help you find products based on questions, show comparisons, offer personalized results, and more.

Amazon is expecting the development of its new search feature to be so monumental that it described it as a “once in a generation transformation for Search, just like the Mosaic browser made the Internet easier to engage with three decades ago.”

The requirements for the job are quite high and there are additional preferred qualifications, but the pay is good, ranging from $136,000 to $260,000 per year depending on US geographic location. The qualifications for the job, which is no longer available, were as follows:

BASIC QUALIFICATIONS
- 3+ years of building machine learning models for business application experience
- PhD, or Master's degree and 6+ years of applied research experience
- Knowledge of programming languages such as C/C++, Python, Java or Perl
- Experience programming in Java, C++, Python or related language
- Experience with neural deep learning methods and machine learning

PREFERRED QUALIFICATIONS
- Experience with modeling tools such as R, scikit-learn, Spark MLLib, MxNet, Tensorflow, numpy, scipy etc.
- Experience with large scale distributed systems such as Hadoop, Spark etc.

Another job ad, which is still up at the time of writing, seeks a Senior Technical Program Manager. That ad states: “We are working on a new AI-first initiative to re-architect and reinvent the way we do search through the use of extremely large scale next-generation deep learning techniques. Our goal is to make step function improvements in the use of advanced Machine Learning (ML) on very large scale datasets, specifically through the use of aggressive systems engineering and hardware accelerators.”

Amazon wouldn’t comment to Bloomberg on the job listings, but a spokesperson said that the company is investing in generative AI across all of its businesses. If Amazon does manage to implement this new search tool on its website, it will significantly strengthen the connections between customers and sellers, and discovery will be far easier. With that being said, it will be interesting to see whether some sellers are negatively impacted, as customers may not spend as long browsing and won’t come across as many products they weren’t necessarily looking for.

Source: Amazon (1, 2) via Bloomberg
-
Google is "supercharging" its classic Search platform with AI for info, shopping and more
John Callaham posted a topic in Front Page News
Google is "supercharging" its classic Search platform with AI for info, shopping and more by John Callaham During the Google I/O 2023 keynote, the company announced plans to add generative AI features to its core Search platform. This will be different from using the Bard chatbot AI which is now available without a waitlist. Google's blog post states that it will test these new features in Search Labs, before they roll out to all users. Using generative AI will allow users to type in more complete search prompts. It stated: Let’s take a question like “what's better for a family with kids under 3 and a dog, bryce canyon or arches.” Normally, you might break this one question down into smaller ones, sort through the vast information available, and start to piece things together yourself. With generative AI, Search can do some of that heavy lifting for you. You’ll see an AI-powered snapshot of key information to consider, with links to dig deeper. The new AI features will also include more details when shopping for an item. Google states: When searching for a product, you’ll get a snapshot of noteworthy factors to consider and products that fit the bill. You’ll also get product descriptions that include relevant, up-to-date reviews, ratings, prices and product images. In another blog post, Google also talked about a new Perspectives feature for Search that is designed to help people learn about something from the thoughts of others. It stated: Let’s say you’re moving across the country, and you don’t know anyone who lives there yet. You search for “how to make friends in a new city,” and tap the Perspectives filter, which shows you a page of results with advice from other people, like personal stories told through video, or tips from commenters in a forum thread. Stay tuned as we report more news from Google I/O today. -
Google officially unveils PaLM 2 LLM that will power next-gen Google Services
anmol112 posted a topic in Front Page News
Google officially unveils PaLM 2 LLM that will power next-gen Google Services by Mehrotra A

At Google I/O 2023, Google announced a host of new features and services that will take advantage of generative AI. Along with them, the company also unveiled the latest iteration of its large language model (LLM). Called PaLM 2, the new LLM is built upon the foundation of the existing model and is already powering 25 Google services, including the company's chatbot, Bard.

At the event, Google shared the improvements that PaLM 2 brings to the table. These include improved multilingual capabilities: the model is trained on more than 100 languages, and Google claims it can pass advanced language proficiency exams at the "mastery" level. Moreover, PaLM 2 is trained on a variety of scientific papers, journals and websites, giving it strong research and mathematics capabilities. Lastly, the model has also been trained on a large amount of code, making it proficient not only in modern languages like Python and JavaScript but also in languages like Prolog, Fortran and Verilog.

Google has already deployed PaLM 2 within 25 of its services, which include productivity tools like Google Docs, Sheets and Gmail. Furthermore, the model is also powering specialized Google services like Med-PaLM 2, which has achieved "state-of-the-art results in medical competency, and was the first large language model to perform at “expert” level on U.S. Medical Licensing Exam-style questions." Another specialized example is Sec-PaLM, which uses PaLM 2 and Google Cloud to scan for malicious scripts, cyber threats and more. At the event, Google stated:

Even as PaLM 2 is more capable, it’s also faster and more efficient than previous models — and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases. We’ll be making PaLM 2 available in four sizes from smallest to largest: Gecko, Otter, Bison and Unicorn. Gecko is so lightweight that it can work on mobile devices and is fast enough for great interactive applications on-device, even when offline. This versatility means PaLM 2 can be fine-tuned to support entire classes of products in more ways, to help more people.

Finally, Google also teased Gemini, a new multimodal model that is highly efficient at tool and API integrations. Google notes that Gemini is currently being trained and, once fine-tuned, will be available "at various sizes and capabilities, just like PaLM 2, to ensure it can be deployed across different products, applications, and devices for everyone’s benefit."
-
Microsoft 365 Copilot Early Access Program announced for 600 business customers
John Callaham posted a topic in Front Page News
Microsoft 365 Copilot Early Access Program announced for 600 business customers by John Callaham

In March, Microsoft 365 Copilot was first announced. This was the company's reveal of the generative AI features coming to its Office productivity apps like Word, Excel, PowerPoint, and more. At the time, Microsoft launched a testing program with 20 of its enterprise customers. During this period, the company has received feedback from these customers about how their employees use Copilot. Microsoft stated:

Their overwhelming feedback is that Copilot has the potential to revolutionize work. They point to how it is a game changer for meetings and is beginning to transform the way they create. And, they’ve identified areas where we can do more to help people adapt to this new way of working, like the need for more conversational, multi-turn interactions.

Today, Microsoft has announced a new program that will expand the testing of this feature to many more users. It's called the Microsoft 365 Copilot Early Access Program, and it will initially be available as a paid preview to 600 more customers. There's no word on how much Microsoft is charging for this paid preview. However, the program will almost certainly generate a lot more information on how customers use its generative AI features.

Since the March announcement of Microsoft 365 Copilot, the company has revealed even more productivity app support for AI features using the Copilot branding, including OneNote, Viva, and SharePoint. Today, Microsoft announced that even more apps will be adding Copilot AI features:

- Copilot in Whiteboard will make Microsoft Teams meetings and brainstorms more creative and effective. Using natural language, you can ask Copilot to generate ideas, organize ideas into themes, create designs that bring ideas to life and summarize whiteboard content.
- By integrating DALL-E, OpenAI’s image generator, into PowerPoint, users will be able to ask Copilot to create custom images to support their content.
- Copilot in Outlook will offer coaching tips and suggestions on clarity, sentiment and tone to help users write more effective emails and communicate more confidently.
- Copilot in OneNote will use prompts to draft plans, generate ideas, create lists and organize information to help customers find what they need easily.
- Copilot in Loop helps your team stay in sync by quickly summarizing all the content on your Loop page to keep everyone aligned and able to collaborate effectively.
- Copilot in Viva Learning will use a natural language chat interface to help users create a personalized learning journey including designing upskilling paths, discovering relevant learning resources and scheduling time for assigned trainings.

In addition, all Microsoft 365 E3 and E5 customers will get access to Semantic Index for Copilot. Microsoft says this feature will create a map of a company's user and business data that customers can search through, even if they don't use Microsoft 365 Copilot. It stated:

For example, when you ask it about the “March Sales Report,” it doesn’t simply look for documents with those words in the file name or body. Instead, it understands that “sales reports are produced by Kelly on the finance team and created in Excel.” And it uses that conceptual understanding to determine your intent and help you find what you need.

The clear intent is that Microsoft wants more feedback about its Copilot features, but it also doesn't want to unleash them in an open public preview yet. This slow rollout via the Early Access Program is a good compromise: it lets Microsoft learn how businesses are using Copilot to help their employees, rather than as a tool to replace workers.
-
Microsoft's Mikhail Parakhin says Bing Chat should get model updates three times a year
zikalify posted a topic in Front Page News
Microsoft's Mikhail Parakhin says Bing Chat should get model updates three times a year by Paul Hill

Mikhail Parakhin, Microsoft’s head of Advertising and Web Services, has suggested that Bing Chat could receive new model updates about three times per year. Model updates tend to bring new features to these generative AI chatbots; past updates, for example, introduced better formatting for answers in Bing Chat’s Creative mode.

To be clear, nothing is codified in a roadmap at this point; Parakhin was just responding with a rough estimate. He said “you should expect models to be updated maybe three times a year or so,” suggesting that it’s a rough guess and subject to change at any time.

Models themselves it takes months to train, so, outside of some small RLHF tuning runs, you should expect models to be updated maybe 3 times a year or so. — Mikhail Parakhin (@MParakhin) May 8, 2023

In addition to major updates three times a year, he said there will be some small reinforcement learning from human feedback (RLHF) "tuning runs" that could improve the responses of Bing Chat. Since ChatGPT came onto the scene at the end of last year, OpenAI, Google, and Microsoft have all been issuing updates to their respective generative AI products. These incremental updates will be an important factor going forward, as users will likely gravitate to the service that is most capable.

Via: Search Engine Roundtable
-
OpenAI no longer uses API customer data to train its LLMs
zikalify posted a topic in Front Page News
OpenAI no longer uses API customer data to train its LLMs by Paul Hill

Sam Altman, the CEO of OpenAI, has confirmed to CNBC that the company no longer uses API customer data to train its large language models. OpenAI updated its Terms of Service to reflect this at the start of March but didn’t make a song and dance about it. If you use ChatGPT directly, your data will still be used for training, unless you go incognito.

In an interview, Sam Altman told CNBC that customers “clearly want us not to train on their data, so we’ve changed our plans: We will not do that.” Unfortunately for those using ChatGPT directly, this is not the case by default. The collection of data is such an issue that Samsung has banned employees from using chatbots like ChatGPT over security leaks.

Generative AI is an entirely new category of software, and companies like OpenAI, as well as wider society, are still getting to grips with best practices. Earlier today, Neowin reported that the Competition and Markets Authority is going to start investigating how these generative AI products could affect competition and consumers.

Another area where these bots have had to be improved retrospectively is guardrails. Since their launch, they’ve been adapted to ensure they don’t say offensive things. When users try to get the bots to say something offensive, the bots respond with pre-written messages letting the user know they can’t help with that request.

Source: CNBC
-
UK competition watchdog launches review into generative AI
zikalify posted a topic in Front Page News
UK competition watchdog launches review into generative AI by Paul Hill

The UK’s Competition and Markets Authority (CMA), which recently blocked Microsoft’s purchase of Activision Blizzard, is launching an initial review of “artificial intelligence models” such as those that power Bing Chat and ChatGPT. The CMA launched the review after the government asked regulators to look into how AI will affect consumers, businesses, and the UK economy.

Through the initial review, the CMA wants to look at three things in particular: how these AI models could evolve; the opportunities and risks for competition and consumer protection; and the principles needed to support competition and protect consumers.

Various regulators in the UK will be looking into how AI affects their own areas. AI touches on several important issues such as safety, security, copyright, privacy, and human rights. Take, for example, the replacement of writers by AI: it’s not as simple as asking the AI to spit out an article for you. In terms of copyright, the output generated by the AI actually belongs to the company that makes the AI. While Google or OpenAI won’t come after you for cheating on your homework, they very well may come after people using outputs for commercial purposes. All of these various issues will be investigated by the UK’s different regulators, but the CMA will focus more narrowly on the implications AI has for competition and consumer protection.

“AI has burst into the public consciousness over the past few months but has been on our radar for some time. It’s a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth,” said Sarah Cardell, Chief Executive of the CMA.

“It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection.”

The CMA is now seeking evidence from stakeholders (basically, anyone who may be affected) until June 2. After it has collected these insights and done its own analysis, it will publish a report with its findings in September. If you’d like to find out more or track the development of this work, head over to the initial review webpage.
-
Microsoft's LinkedIn is using AI to get hiring managers to notice job seekers
John Callaham posted a topic in Front Page News
Microsoft's LinkedIn is using AI to get hiring managers to notice job seekers by John Callaham

Microsoft's LinkedIn business social network has already announced it is using generative AI to help users improve what's on their profiles, and to help businesses write better job postings. Today, LinkedIn announced a new feature that is supposed to help job seekers create a message to hiring managers on the service.

Engadget reports that if a user sees an open job on a business's page, they will see a "Let AI draft a message to the hiring team" option. When clicked, the site will write out a message based on the job seeker's LinkedIn profile, the hiring manager's profile, the business's description, and the details of the open job position. Of course, after the AI creates the message, the user can edit it to make it look more personal, and to make sure that the letter doesn't contain any errors.

The new feature is only for people who sign up for the paid LinkedIn Premium subscription and should start rolling out this week. It definitely sounds a lot like the generative AI features being put into Microsoft 365 Copilot, which will write emails, documents, spreadsheets, and more from scratch based on some text prompts. Microsoft is currently testing Copilot with a few businesses but will expand its reach to more companies in the coming months.
-
Box announces AI integration into its products in partnership with OpenAI
Karthik Mudaliar posted a topic in Front Page News
Box announces AI integration into its products in partnership with OpenAI by Karthik Mudaliar

Cloud storage company Box has announced that it is introducing new artificial intelligence (AI) features across its products in partnership with OpenAI. The features will include analyzing information across customer contracts, summarizing financial documents, surfacing insights from surveys, and more. Aaron Levie, co-founder and CEO of Box, said:

“We are at the start of a platform shift in enterprise software driven by recent advancements in generative AI, and nowhere is the potential impact greater than in enterprise content. We’ve seen a step function improvement in our ability to analyze and synthesize the massive amounts of data contained within an organization’s unique documents, videos, presentations, spreadsheets, and more. When combined with AI, we will be able to unlock the value of this content and make every person in a company smarter and more productive. Content is an organization’s most important data, and with Box AI we’re just getting started with how we’ll transform the way work gets done.”

Although it isn't a separate product, many companies are now jumping on the AI bandwagon to refine their existing functionality, and Box is doing the same with what it calls Box AI. The company is currently focusing on a few use cases that we have already seen are possible with generative AI. For starters, by clicking on the Box AI button, users can ask questions such as "Summarize this document for me" or "When does the NDA expire?", which could come in handy for complex and lengthy documents or contracts. Here are some more use cases that Box mentioned in its press release:

- Sales teams will be able to use Box AI to get answers to questions in complex contracts to speed up the sales cycle.
- Analysts will be able to have Box AI summarize lengthy financial reports to inform their rating recommendations.
- Legal teams will be able to ask Box AI to identify key clauses, terms, and obligations from a contract to speed up review cycles.
- Operations teams will be able to tell Box AI to extract key takeaways from a budget to update corporate strategy decks without waiting on a co-worker from the finance team for the right piece of information.
- Customer service teams will be able to use Box AI to surface insights from hundreds of customer feedback surveys to identify key areas for improvement.

Box says that it caters to more than 115,000 customers and that bringing foundational AI models to where their content is already securely stored will make those models more useful and valuable. For now, Box AI will only be available to select customers through its upcoming "Design Partner Program". Interested users can also sign up for private beta access when it becomes available. Pricing and other details will be announced later, upon general availability.
-
Samsung is banning its employees from using chatbots like ChatGPT due to security leak
John Callaham posted a topic in Front Page News
Samsung is banning its employees from using chatbots like ChatGPT due to security leak by John Callaham

More companies are cracking down on the use of generative AI chatbots like ChatGPT, Microsoft's Bing Chat, and Google's Bard. The latest business to ban the use of these kinds of chatbots is Samsung. Bloomberg reports that the company sent out a memo to all employees last week, barring them from using chatbots at work or on devices they use for work. According to the report, Samsung discovered that some of its workers had uploaded secret company source code to ChatGPT. There's no word on exactly what kind of data was leaked.

While Samsung employees are allowed to use chatbots on devices they personally own, and they can use them outside of work, the company's memo asked workers not to submit any company information to chatbots, nor to upload any personal information that could result in a leak of Samsung's intellectual property. The memo added:

We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment.

The report says that Samsung is developing its own AI software that will do some of the work that chatbots can do, such as summarizing reports, writing software, and translation. This latest move by a major tech company to ban the use of chatbots at work shows that more and more businesses are becoming concerned about how generative AI could lead to security issues. This week, Microsoft outlined how it will implement responsible AI practices in its products.