
Introducing V-Ray 6 for 3ds Max

Go anywhere your creative instincts take you and build anything imaginable — faster than ever before. 

Create faster and render better than ever before. Populate your scenes with millions of 3D objects to create natural-looking landscapes and realistic environments quickly and easily. Generate detailed surfaces using repeating geometric patterns at no extra memory cost. Create beautiful custom skies with procedural clouds, collaborate in the Cloud, and much more.

Pricing and Licensing 

To simplify the V-Ray product line and to offer customers more value and flexibility, Chaos will no longer sell V-Ray for specific host applications. Instead, customers will only need to purchase a single V-Ray license to access all V-Ray integrations, including 3ds Max, Cinema 4D, Houdini, Maya, Nuke, Revit, Rhino, SketchUp, and Unreal.

V-Ray will also be offered in three tiers:  Solo, Premium, and Enterprise.

Chaos V-Ray Solo


For customers looking for the most cost-effective way to use V-Ray on a single computer, Solo is a great option. This node-locked V-Ray license can be used to set up and render on a single machine using any supported host integration. Access to 3D content on Chaos Cosmos is also included. V-Ray Solo is an excellent choice for those looking to save money without compromising on quality or features. Solo node-locked licenses will be available as monthly, annual, and 3-year subscriptions.

Chaos V-Ray Premium


The Premium tier is the best option for most customers and is comparable to the previous V-Ray Collection offering. With this tier, customers can install V-Ray on any machine and use it with any supported host application. Premium also includes a floating license for Chaos companion products, such as Phoenix, Player, Scans, and Vantage. In addition, it includes a starting pack of 20 free credits for Chaos Cloud rendering and access to the growing Chaos Cosmos content library. Premium floating licenses will be available as monthly, annual, and 3-year subscriptions.

Chaos V-Ray Enterprise


Enterprise is the most cost-effective option for studios and firms that need 5 or more licenses of V-Ray. Customers get all of the benefits of the Premium tier, priced at a discount for multiple seats. Enterprise floating licenses will be available as annual and 3-year subscriptions.

New license sales will be subscription only. Subscriptions are an easy and flexible way for customers to stay current with all updates and new features of V-Ray. Monthly, annual, and 3-year subscription options are available for Solo and Premium tiers, providing an affordable way to start using V-Ray right away while benefiting from further discounted pricing on longer subscription plans. Enterprise plans do not include a monthly option and are priced for annual and 3-year terms.

There will be no perpetual license options for new customers.

                        Solo                  Premium            Enterprise
License Type            Node-locked license   Floating license   Floating license
Min. # of licenses      1                     1                  5
Support included        Yes                   Yes                Yes

Included products:
  Chaos V-Ray all*      Yes                   Yes                Yes
  Chaos Cosmos          Yes                   Yes                Yes
  Chaos Cloud**                               Yes                Yes
  Chaos Phoenix                               Yes                Yes
  Chaos Vantage                               Yes                Yes
  Chaos Player                                Yes                Yes
  Chaos Scans                                 Yes                Yes

*Supported V-Ray integrations include 3ds Max, Cinema 4D, Houdini, Maya, Nuke, Revit, Rhino, SketchUp, and Unreal.
**Free starting pack of 20 credits.

Buy V-Ray 6 for 3ds Max

To purchase, visit the product page here.


Get in touch

Call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

Volumetric video – A Bootcamp

So, what is volumetric video?

It’s a technique for recording a live video performance and playing it back from any angle. Cameras capture multiple vantage points simultaneously, and post-processing converts all the video captures into an animated volumetric dataset and 3d model. This 3d representation can be imported into any 3d environment, extending the creative potential to play back and manipulate the animated data. It brings a new dimension to working with video compared with flat 2d video taken from a single camera’s perspective.

But why? We can only see from one point of view.

Volumetric video data offers another level of control and flexibility. Data can be embedded into immersive real-time virtual or augmented reality applications or used as part of video productions. Volumetric video adds an engaging advantage to creative processes. Viewing 3d video on the fly while it’s playing back from any angle is a powerful proposition for creative users. You can manipulate the lighting and colour, and, with a little interpolation, AI can help clean or smooth volumetric data and enhance the overall result.

Why would I need to do that?

Although volumetric video is in its infancy, volumetric data is not a new concept. Many consumer devices can already record and track motion via cameras and infrared sensors, in applications that interpret gesture control or scan objects and faces for pseudo-3d models or, when done professionally, highly detailed 3d models and complete environmental renditions. Consumer photogrammetry, for example, typically uses a camera or video to capture an object placed centrally, with pictures taken from multiple angles. Take enough photographs of the subject and a detailed 3d model will materialise. It is a common technique that has been available for decades. Volumetric video capture records a complete performance, and there are many use cases.

Imagine being at a sports game or a music festival as a spectator. You can replay the event and watch it from any position. It’s possible to feel like you are standing on stage, or to view a replay of a memorable sporting victory from any angle. You control where you are standing, free of physical limitations, which is something you can’t get watching television. It takes you closer to the feeling of being there.

In creative industries, you can manipulate an actor’s performance even after a recording has finished or use the data to drive other CG characters.

Volumetric video and the extrapolated 3d data enhance the degree of control over precisely what you want to achieve. It opens up many creative benefits.

The concepts are similar to other virtual production tools, but where volumetric video differs is that it captures physical movements and environments, not just stationary objects. Both have their uses and complement each other.

When you say perspective…

I mean exactly that. The great thing about volumetric video is that it records the scene in full 3d, so you aren’t even limited to the perspectives of the capturing devices. You can move a 3d camera anywhere in the scene and view it from any angle. Think of a football game, where you want to view the action from a freely movable camera. Clearly, you can’t have thousands of physical cameras covering every available angle and distance. But volumetric video builds that scene in 3d, so if you want to watch a glorious goal you can do so from any point of view, up close or taking in the whole pitch (or anything in between).

One more trick of this technology is that you can put yourself in the position of any of the performers, or even a static object in the scene, for an immersive experience.

There are also clear benefits beyond the entertainment field. Imagine a medical setting where a tricky situation can be observed from different perspectives or natural world events that can be preserved digitally. The opportunities are still growing.

Right, so what else?

Think about a trip to your favourite online shop. Imagine being able to choose an outfit and see it from any angle, at your pace and in your direction. Some tools already exist that simulate this, as well as placing objects virtually in a space, but volumetric video takes this to a whole new level. Pick a wardrobe and decorate a space, then share that with whoever you wish, for feedback or just to show off your style.

Ok, I’m sold. How do I get started?

To start the process, you’ll want a set of cameras, positioned similarly to how motion capture works, with a mesh of cameras surrounding a volume. The difference here is that shape, light, and colour are captured, not just the motion. Because of this, an evenly lit volume might be best but that will depend on the situation.

Next up, the captured footage will need to be processed ready for the consumer, or further edited or integrated into a 3d scene. Tools like Depthkit, HoloCap and Holosuite by Arcturus are ideal for this. The latter also offers non-linear editing tools to ease your projects along.

Once processed and edited, the volumetric files can be integrated into augmented or virtual reality projects, delivered online with real-time navigation of your scenes, and much more.


Contact Us

To find out more, call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

What’s new in Autodesk Maya 2023 with Graham Bell

Graham explores the latest Autodesk Maya 2023 release, including the new Bifrost USD integration and Arnold updates.

Buy Maya

To purchase, visit the product page here.


Get in touch

Call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

The VFX market consolidation effect

The price is right.

Small, agile teams are born out of the necessity to develop new tools and applications. Pioneering work in visual effects is often the starting point for innovation. Every few years, new digital content creation tools bridge a productivity gap, reimagine a VFX workflow, or forge more efficient, more collaborative ways to deliver increasingly complex shots.

Almost every application in the VFX space began with a similar story. There is an overwhelming expectation to find commercial validity to support product longevity and increase market presence; otherwise, competitors will take over. When done correctly, however, positive transformations in productivity and praise for innovation are recognised and rewarded by a loyal user base, and commercial success is practically secured. Once enough noise is made, smaller firms become targets for acquisition by larger corporate giants looking to increase their market presence.

Over the last 20 years, the tools used for digital content creation, especially in the world of visual effects, have seen a lot of merging and consolidation due in no small part to acquisition.

Acquisition can happen in various ways and while it may be in the form of some type of capital investment, what is of more obvious concern to us in visual effects and content creation is the acquisition or merging of brands and what impact that has on the end-user experience over time.

For others, the worry is more about any adaptation or evolution of tools they had implemented into their pipeline. Would the new owner develop the tools into something that didn’t fit with what the artist or studio had initially bought them for, or could they be ended entirely?

The answers to these questions are varied and each case has a different outcome and can change over time.

Sometimes a developer acquires a business or tool to gain access to part of its technology, or to embed technology that is simply lacking in their own product offering; this is often the most common reason. Whether that technology is absorbed into something the new owner already has, or continues to run as before under new leadership, is another matter and can go either way.

Autodesk continues to make acquisitions to bolster its industry advancement and competitive edge; recent examples include Tangent Labs and Moxion, supporting its well-known strategy of becoming a true platform provider. Maxon’s recent purchase of Pixologic is a similar example.

Whatever the process, evolution and outcome, the key from an end-user perspective is to understand the impact on your own work, your business costs and your forward planning.

Open to all

Of course, there is a totally different aspect to the consolidation of VFX technologies, and it is certainly a bigger one, with far-reaching impact for everybody involved in this industry and largely positive outcomes for all.

Having technologies that can fit into as many different pipelines as possible is a good thing for studios and artists alike, as it means that assets are more easily utilised in different ways, the talent pool becomes more dynamic, training needs are far fewer and the costs to the business are (hopefully) reduced.

What we are talking about here is the move to cross-platform technologies. By this, I don’t mean which OS a studio might use but the way certain elements of the VFX workflow have been and are being increasingly developed.

Although there are limitations, formats like OBJ and FBX meant that assets created in one tool could be used in another. The benefits here are clear; however, there have been other moves in the industry that take this well beyond the simple sharing of a model.

Pixar’s Universal Scene Description (USD) is one such development and has seen rapid uptake, not just by studios and artists, but also by other developers keen to support the open-source tools. This move makes it easier, more efficient and more effective for teams to contribute to a project from different locations, software suites and pipelines.
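
To make this concrete, here is a minimal sketch using Pixar’s pxr Python bindings; the file names and prim paths are illustrative only, but the layering idea is exactly what lets different departments contribute to one shot.

```python
# A minimal USD layering sketch (assumes the pxr Python bindings are installed).
from pxr import Usd, UsdGeom

# The layout department publishes the set geometry as its own layer.
layout = Usd.Stage.CreateNew("set_layout.usda")
UsdGeom.Xform.Define(layout, "/Set")
UsdGeom.Cube.Define(layout, "/Set/Crate")
layout.GetRootLayer().Save()

# The shot stage sublayers that file; animation or lighting can add further
# layers on top without ever touching the layout file itself.
shot = Usd.Stage.CreateNew("shot_010.usda")
shot.GetRootLayer().subLayerPaths.append("set_layout.usda")
shot.GetRootLayer().Save()

# Any USD-aware package (Maya, Houdini, Unreal, etc.) can now open
# shot_010.usda and see the composed result.
```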

Epic’s Unreal Engine has become a huge tool, not just for game development but for all manner of use cases, from architecture to virtual production. Support for Unreal Engine has broadened too, not just in terms of the user base but also among other developers. SideFX and Chaos, among others, have developed tools for Unreal Engine integration; and, going back to acquisition, Epic now owns Quixel and has rapidly added Megascans to the Unreal ecosystem.

And let’s not forget “collaboration”, as this often binds competitive products together, allowing the end-user to create effortlessly and optimise their pipeline; it’s natural that Maya (Autodesk) works seamlessly with Nuke (The Foundry) and Photoshop (Adobe) with 3ds Max (Autodesk), for example.

How do I know what’s best?

This is where we come in. Here at Bluegfx we understand your production needs and can help inform your decisions, not just on where to invest, but on how to build a pipeline and workflow that gives you the best solution for effective and efficient processes.

We have a dedicated team of technical and workflow specialists that keep up to date on all aspects of the industry and the wider community, so they can guide you through the process of initiating or developing your tech needs.

Whether you need guidance on your existing pipeline, or simply want to chat about best practices or what tools you can integrate to best support your production workflow, or if you are looking to start something new, we can guide you through the process.


What have you got to lose? Schedule a call with our Tech Team.

We’d love to hear from you – please call us on +353 (0)1 526 6717 or email sales@bluegfx.eu



Bifrost for Maya is here!

Bifrost makes it possible for 3D artists and TDs to create serious effects in Maya quickly and easily using a new visual programming environment.

Ready-to-Use Graphs 

From dust motes and volumetric clouds to fire and explosions, artists can explore a library of pre-built Bifrost graphs in the new Bifrost Browser to create great looking effects fast. Graphs can also be used as a starting point for creating custom effects from scratch. Users can then publish their own graphs to the Browser for other artists to easily find and re-use.

One Graph


In a single visual programming graph called the Bifrost Graph Editor, technical artists and TDs can mix nodes ranging from math operations to file IO, particles, volumes, mesh or surface operations, and even simulations.

Realistic Previews


With Arnold integrated in Maya, artists can see exactly what their effects will look like after lighting and rendering, right in the Arnold Viewport. They can also see near-final previews of their effects in Viewport 2.0.

Not only do accurate previews reduce the potential for last minute iterations later on in production, they also provide artists with a more creatively engaging experience by making it easier to see the results of changes they make at interactive rates.

Detailed Smoke, Fire, and Explosions


New solvers for aerodynamics (Aero) and combustion make it easy for artists to create deceptively natural-looking smoke, fire, and explosions.

Simple artistic controls let artists adjust boundary conditions for effects to create more physically-accurate interactions with the surrounding scene and set up adaptivity to automatically add or remove detail depending on characteristics such as velocity, turbulence, and smoke density.

The combustion solver performs computational chemistry and thermodynamics to simulate the physical reactions of fire, flames, and explosions.

Artists can choose from an array of real-world fuels like methane and butane, and the solver automatically generates realistic outputs such as (digital) carbon monoxide and water vapor.

The Material Point Method


The Material Point Method (MPM) was made famous by its use as a snow solver in Disney’s Frozen. Autodesk teamed up with Jixie Effects, founded by members of the original research team, to develop a production-ready MPM solver. One of the key advantages of MPM is that the behavior of simulations remains consistent as resolution increases. While the original MPM work focused on granular simulation of snow, Jixie Effects has extended the Bifrost MPM solver to now tackle other phenomena, including:

  • MPM Granular – Simulates granular materials such as sand and mud, as well as snow
  • MPM Cloth – Simulates dynamic thin shells and cloth such as textiles, aluminum, and plastics
  • MPM Fibers – Simulates dynamics of each fiber (or strand) individually

High-Performance Particle System


Using particles to drive aerodynamic and combustion simulations is a common and effective workflow. Entirely crafted using visual programming, the new particle system in Bifrost adds even more power than was previously possible with particles in Maya, with the ability to break open and customize the system as needed.

Artistic Effects with Volumes


To create a desired effect, artists often need to convert between meshes, points and volumes, and process volumes for artistic effects. Bifrost comes loaded with a number of nodes for these purposes including converting meshes, points, and particles to volumes; converting volumes back to meshes; smoothing volumes; sampling properties of volumes; and scattering points inside volumes.

Technical artists can also use visual programming to artistically process volumes including advecting volumes with noise and adding noise to a volume.

Flexible Instancing


Bifrost introduces high-performance, render-oriented instancing, empowering users to create enormous complexity in their scenes without having to worry about overloading memory or slowing performance. At the heart of instancing is point-based geometry.

Artists can create instances with any number of procedural and artist-driven techniques, and then apply them to scattered points, particle systems such as the MPM solver, and vertices of any other geometry. Instance shapes are flexible and can be easily adjusted using a simple selection mechanism that can select between multiple layers of variation.

For example, one layer might distinguish between grass and flowers, and another might drill down to select variations of each.

Because Bifrost instancing is tightly integrated with Arnold instancing and Viewport 2.0, artists can instance any renderable Bifrost geometry including meshes, volumes, strands or points, as well as fully renderable assets in the form of render archives, such as Arnold .ass files.

Detailed Hair, Fur, and Fuzz


The world is full of fibres. From hair and fur to fuzz, clothing, grass, and even dust, artists frequently need to model things consisting of multiple fibres (or strands). Bifrost makes it simple for artists to do so procedurally.

  • Using colored strands, artists can draw flowlines, vectors, and links between different elements to better understand data.
  • It’s easy to alter the way strands look with controls for adjusting thickness, color, and orientation.
  • Strands can be rendered using Arnold as either ribbons for hair shaders or cylinders for all other Arnold curve rendering options.

Buy Autodesk Maya

Visit the Maya product page.


Contact Us

Call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

Introducing MotionBuilder 2022!

This release includes support for Python 3, developer-focused improvements and workflow enhancements to help improve daily work for animators and developers.

Python 3 Support

Python 3 is now available for MotionBuilder, providing performance, security, and dev-centric language enhancements, while support for pip delivers instant access to the latest libraries and tools. Python 3 is the new default on Windows and Linux, but MotionBuilder can still be launched in Python 2 mode by setting an environment variable or with a command-line flag.
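
As an illustration, here is a minimal sketch of a tool script written so it runs under either interpreter mode; pyfbsdk is MotionBuilder’s standard Python module, and the printed messages are just placeholders.

```python
# A minimal sketch: a MotionBuilder script kept compatible with both the
# Python 3 default and the legacy Python 2 mode.
from __future__ import print_function  # harmless under Python 3

import sys

from pyfbsdk import FBSystem  # MotionBuilder's built-in Python module

print("Interpreter: Python", sys.version.split()[0])
print("MotionBuilder version:", FBSystem().Version)

if sys.version_info[0] < 3:
    print("Running in legacy Python 2 mode; consider porting this tool.")
```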

Developer-Focused Improvements

More powerful MotionBuilder Command Line:
The MotionBuilder Python command-line tool has been significantly improved. mobupy.exe can be used to call the MotionBuilder Python SDK. This opens the door to new capabilities like file processing and rendering from the command line.
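
For example, a batch re-save job might look like the sketch below, run as `mobupy.exe batch_resave.py`; the folder path, suffix, and script name are hypothetical, while FBApplication’s FileOpen/FileSave calls are standard pyfbsdk.

```python
# batch_resave.py - a minimal command-line sketch for mobupy.
# Opens every FBX file in a (hypothetical) folder and re-saves a copy.
import glob
import os

from pyfbsdk import FBApplication

app = FBApplication()

for src in glob.glob(r"D:\shots\incoming\*.fbx"):
    dst = os.path.splitext(src)[0] + "_processed.fbx"
    if app.FileOpen(src):      # returns False if the file fails to load
        app.FileSave(dst)

app.FileExit()
```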

Python API:
The MotionBuilder API has been expanded and improved to include additional capabilities based on common feedback from developers.

Python Editor Updates:
The Python Editor now displays an ‘X’ button on each script tab, providing a quick way to close scripts and allowing users to manage multiple script tabs more efficiently.

Customizable Splash Screen:
Tool developers can now modify the MotionBuilder splash screen with custom imagery, defined by a new environment variable named MOTIONBUILDER_ICON_PATH. This enables artists and studios to customize the startup experience to differentiate between different projects or tool versions.
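
For instance, a studio launcher could set the variable before starting MotionBuilder, as in this hedged sketch; the image and executable paths are assumptions to adjust for your own setup.

```python
# A minimal launcher sketch: point MOTIONBUILDER_ICON_PATH at a custom
# splash image, then start MotionBuilder. Paths below are placeholders.
import os
import subprocess

env = os.environ.copy()
env["MOTIONBUILDER_ICON_PATH"] = r"D:\pipeline\branding\project_splash.png"

subprocess.Popen(
    [r"C:\Program Files\Autodesk\MotionBuilder 2022\bin\x64\motionbuilder.exe"],
    env=env,
)
```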

Workflow Enhancements

Quaternions in the FCurve Editor:
Animators can now visualize real Quaternion Rotation properties within the FCurve Editor so that they match evaluation values, instead of being displayed as normal FCurves. A new checkable menu item, “Real Quaternion Rotations”, is available in the contextual menu of the FCurve Editor, and “Showing Real Quaternion Rotations” is displayed in the FCurve Editor to indicate whether the mode is enabled.

Character Extension Improvements:
The new “Add to Body Part” property enables artists to key a character extension when selecting the body part containing a reference object of that character extension. This simplifies the keyframing process for character extensions by allowing users to animate them in conjunction with an associated body part.

Auxiliary Effector Usability:
When working with Character Controls, animators can control whether the auxiliary pivots/effectors should be automatically shown or not by using “Automatically Expand Auxiliaries”, a new preference added in the Character section. This saves time for animators by reducing the number of actions required to expand IK auxiliaries each time they select individual effectors in the Character Controls.

Relation Constraint Updates:
Artists can include relation constraints in an animation layer, while also controlling layer weight, improving the visibility of the layer relationships within the Navigator. Users can add an animation layer in a relation constraint to control the weight of the layer. The layers are now shown and parented to their respective takes in the Navigator.

Camera Plate Offset and Scale:
Users now have more precise control over the transformation of camera plates. New Offset X/Y and Scale X/Y options have been added to the camera back and front plates. These new options are enabled when the “Fit” option is unchecked and work in conjunction with the existing “Center” and “Maintain Ratio” settings. When the “Maintain Ratio” option is enabled, the new Scale Y value/animation is ignored, and Scale X is used for both.

Group/Set Enhancements:
New options in the Groups/Sets Tool are available to preserve the Show/Pick/Trs state of Groups/Sets when toggling a parent node. If toggling on the Group while the option is enabled, all nested groups will be turned on. If toggling on the Group while the option is disabled, only the nested groups that are already checked will be checked again.

Heads Up Display Enhancements:
The Navigator now supports drag and drop parenting of HUDs. The resulting HUD hierarchies can then be easily attached to cameras and rendered in the Viewer, making it easier to associate things like HUD style rig control setups with cameras.

Better Audio Display:
Similar to the FCurve Editor, the Camera Switcher now has the ability to display an audio waveform from an audio clip or a Story audio track, allowing animators to stay in context while using audio in the Camera Switcher. Previously, they would have to switch back to the FCurve Editor to see the audio waveform.

New Undo Window:
A new Undo window has been added to enable display and modification of a detailed view of the command history. This new feature allows users to see and update the current level within the undo stack.

Faster Image Loading:
When loading a scene file, images contained in the file can now be loaded in parallel to speed up the loading process, giving users a significant speed boost when opening files with numerous, large images. This is especially important when working with scenes that contain image-based reference footage.

C3D Import Improvements:
The new “Up Axis Used in File” option allows users to specify that the up-axis embedded in the C3D file should be used on import, giving them more control over which axis is up in relation to HIK. In previous versions, users did not have access to the up-axis setting stored in a C3D file.

Viewport Selection Accuracy:
Several updates have been made to improve selection accuracy for objects in the Viewport allowing users to interactively select objects such as cameras and locators more predictably in the Viewport.

Story Tool Improvements:
Animators will experience workflow improvements and increased stability when working in the Story tool, based on feedback from power users.

Buy MotionBuilder here.


Contact Us

Call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

What’s the problem?

With recent technical developments, compounded by the Covid pandemic, the industry is now shifting more than ever to different working methods.

With many studios evolving and adopting various remote working scenarios, hardware and software management has become more of a logistical problem.

This can leave some uncertainties for many to manage and resolve. 

How many of the workforce will return to the office full time? How many want more flexible options? How to manage distribution and deployment of hardware and software, and maintain a working pipeline? Can I use my existing in-house technology to operate a hybrid workflow?

For existing facilities wondering whether, and when, the workforce should and could return to the office, is now the time to adopt new working practices or introduce additional capabilities, while retaining the benefits of existing tools until they near end of life?

The solutions?

Virtualisation can be the fix for all these concerns in one simple, scalable package. 

More people in the community have become aware of, and are using, cloud services such as rendering, virtual workstations, storage and archiving. However, this is only a small part of what is available; not everyone realises there is a lot more on offer that can really benefit studios of any size.

That said, there are many different options available and not all of them will work for every company; it depends on your existing setup, your current working practices and what investment you want, or can afford, to put in. So, let’s look at some of the various options available for virtualisation and how a hybrid environment could work for you.

Setting up a fully virtual working environment is possible, and AWS recently introduced Nimble Studio, an all-in-one solution providing access to virtual workstations, high-speed storage, and scalable rendering across AWS’s global infrastructure.

This is a perfect example of what virtualisation can do for you. It makes it easy to set up your team with the tools they need, all with scalability, controlled by an easy-to-use front end. Adopting these intuitive tools increases the capability of your output without the need to move premises, meaning existing infrastructure and hardware can remain in place, providing a great hybrid solution.

If, when existing hardware nears its upgrade time, you decide to switch to full virtualisation, then the change of scale is a simple thing to do. If you decide that physical premises, with the benefits they entail, are still an integral part of how you see the future, you can dovetail the two for as long as needed, with virtualisation taking up the challenge on an ‘as needed’ basis.

Let’s use an example of a studio with 50 creatives of various disciplines, all needing workstation access, and the software they need to complete their work.

What about those who are fully remote or are only in the studio on certain days of the week? When the workforce becomes spread out, as is increasingly the case, the management of assets is infinitely more of a challenge.

What if a creative could log on to a virtualised workstation environment using a simple, cost-effective front end that they manage themselves, centrally monitored by HQ to provide usage data and budget control?

The virtual workstations would have all the software people need, controlled with specific machine images to suit each role and requirement, which could be easily updated and deployed with Nimble Studio.

If some people needed to change work and tools, Nimble can handle this too. Nimble can efficiently deploy new tools to the virtual workstations, delivering access to the right tools at the right time, with processing power scaled to keep people creative, eliminating the need to swap out GPUs or install extra RAM or storage.
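
As a rough illustration of the central monitoring mentioned above, the sketch below assumes the boto3 “nimble” client and its list_studios / list_streaming_sessions operations; treat the region and response field names as assumptions to verify against the current AWS documentation.

```python
# A hedged sketch: count active Nimble Studio streaming sessions per studio
# so HQ can track workstation usage. Region and response fields are assumed.
import boto3

nimble = boto3.client("nimble", region_name="eu-west-1")

for studio in nimble.list_studios().get("studios", []):
    studio_id = studio["studioId"]
    sessions = nimble.list_streaming_sessions(studioId=studio_id).get("sessions", [])
    active = [s for s in sessions if s.get("state") == "READY"]
    print(studio.get("displayName", studio_id), "- active workstations:", len(active))
```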

Of course, the physical and cost benefits of virtualisation are clear and easily quantified but there are other elements to keep in mind. Virtualisation opens a business up to a global talent pool.

Not only are there opportunities to work with the best workforce available but you save on associated overheads like relocation packages and for some, a change of location might not be a possibility. Virtualisation instantly negates those hurdles.

An added benefit of this type of environment is that it not only helps you manage your current setup, but it can also scale up your requirements, allowing you to access additional resources when required.

These different perspectives on working practice need not be an either-or situation, as they can work seamlessly together, with part of the team being in the studio, other team members working remotely or on location using a virtual workstation, all sharing cloud-based file storage, retaining ease of project management and the collaborative nature that is paramount in the creative industry.

Similarly, all your data is secure in the cloud, perhaps even more so than your current in-house set-up.

What about the final image?

Rendering is a computationally intensive process. While commercial render farms are gaining traction, many studios prefer to use their own; this gives ultimate control, but it comes at the cost of inevitable upgrades and their associated expense.

Using virtual processing eliminates the need for all of this. Just scale the virtual tools for the shot at hand. No cooling issues, power fluctuation problems, or bottlenecks caused by files hitting the farm at once. 

AWS Deadline allows for this with ease. You choose the tools, control access, and get the shots back. It’s a fantastic solution for managing render projects, no matter which field you are in, from creative to engineering.

But what about my files?

The growing demand for higher quality content combined with the adoption of deep images and compositing has resulted in increased file sizes and challenging demands for storage.

Using cloud services such as Nimble or SimpleCloud is the best way to stay on top of this. Many of us are used to the cloud, with photo apps on our phones or lower-level options like Dropbox, but where services such as SimpleCloud excel is in scalability and suitability for your projects.

There’s no need to buy future-proofed hardware storage anymore. In fact, with the current rate of development, you’d need mystical levels of prophetic vision to know what will be required in even a year or two.

Cloud-based systems leave the guesswork behind. SimpleCloud, for example, scales as needed, maintaining suitable headroom for the needs of the day. On top of that comes total control over deployment and access for the workforce.

You dictate who can access what, even to the extent that you define who can save files locally or who works totally virtually.

Is it secure?

Absolutely, yes. In some ways, a virtual studio is more secure than having your own studio. Not only are there multiple redundancies to keep your files accessible, but your files are also safe from local risks like fire, flood damage and even theft.

Protecting your assets and IP is a fundamental requirement, and SimpleCloud and AWS allow you to control this easily and from anywhere.

How do you use a virtual workstation?

This is possibly the best bit of all. Because this is a web-based system, anybody you offer access to can use any client machine that suits both your and their needs; in fact, an existing home computer or a tablet would do the trick.

You can even have them working on a machine with one operating system but have the virtual workstation running another, which makes for exceptional additional opportunities for the established studio.

If a creative needs to work elsewhere, say on set or at a different facility, all their familiar tools will be available to them, just as if they were sitting at their usual desk on premises.

On top of this, it is a fantastic way of managing the boosted needs of your business at peak times. Let’s go back to our 50-strong workforce.

Say they were working on a movie, and all was going well when the VFX supervisor offered you another set of shots to produce.

If your studio is at physical capacity, it’s very easy to simply add extra team members virtually, just for the length of time needed for the work to be delivered. No need to buy in hardware that may lie dormant for much of the time.

In a nutshell?

Virtualisation is all about giving you the power to manage a scalable workforce that runs securely, efficiently and in concert with the needs of your business, reducing inactivity and offering you the power to move with the times.

You can maintain your existing setup for as long as you need but secure in the knowledge that you can stay agile, adapting to a changing industry and confident that you can scale to meet new opportunities. 

We suggest:

AWS Nimble services

Simplecloud


Contact Us

Call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

Sitting down with Graham Bell

This is the time of year when much of the population takes stock, often deciding to make big changes to their lives, work or passions. In visual effects this is no different, so trying to find new, better, or just different ways of working is pretty common.

On top of that, there is the age-old debate around whether it is better to specialise or generalise. No matter what side of the coin you land on, the resulting actions are the same. It’s time to look at your existing skills to see what you need to learn next. This is the same if you are looking to supplement your current skill set, or change tack and discover new pipelines to fit into.

The problem here is knowing where to start.

Do you need to go back to school?

Buy a few books, or subscribe to an online course?

Well, BlueGFX can help out there. As well as offering excellent products, we also have in-house product specialists whose years of experience in the industry can help and guide any upskilling needs you may have.

Rather than presenting a list of options here, we thought it might be more interesting and helpful to talk to one of the team’s veterans themselves, to get their take on the industry, skill sets and how to increase your expertise, so we sat down with Graham Bell, who shares his thoughts, followed by some free resources, to whet your appetite:

Tell us a bit about yourself, where did you start in CG and what’s been your journey?

I started as a traditional illustrator for a small studio in Bromley. We started using CG in the mid-90s, just as it seemed to be taking off. I then moved into video games by joining Psygnosis, where I cut my teeth, before eventually joining EA/Bullfrog. At EA I did a lot of mocap work and took those skills into VFX at MPC in London. I went back into games briefly at Codemasters and Climax until I eventually joined Softimage and then Autodesk. That was followed by a spell in automotive visualisation, then back at MPC, and now with Bluegfx.

Are there any key projects that have shaped the way you think about CG? What were they and how have they helped you evolve?

Hard to call out anything in particular. But when I started in games in the PS1 era, there wasn’t much room to play with in terms of model and texture budgets, so I had to be disciplined and creative at times to make things work. These attributes and approaches are something I’ve perhaps subconsciously carried forward into other projects.

Technology and software now is incredible, and capable of many things but some aspects can make you lazy, and it can be too easy to overcomplicate things. So being simple to begin with can be a good starting point.

Where do you stand on software elitism? Every package has its followers and fanboys but is there any benefit to that, or should you go for a more holistic approach?

People will always be passionate or protective about a software package. It could be the first one they learnt, or they just feel more comfortable using it. I’ve used many different packages; some I like, some I don’t. It can be hard to break out of this mindset, but it’s important to be objective.

Everything has its strengths and weaknesses, and you should be open and willing to adapt and change. I know it’s a cliché, but the software is only one tool from a toolbox.

And it’s not those tools that make us good but rather how we employ them. And if we fail, it’s too easy to simply blame the tool used rather than how it was employed.

For people wanting to start 3D do you think they should take a general approach or specialise?

Good question and one that perhaps doesn’t have a definitive answer. I’ve heard points for both. Being a generalist gives you a broad set of skills, but you could be a jack of all trades and a master of none. And I think it’s very hard to be good at everything.

Starting out, having general skills is a good starting point, but with time you will naturally gravitate towards one particular area and that will help you focus. In a studio environment, you will have to collaborate with different areas and disciplines, so it’s important to have some understanding. 

People learn in different ways but can you suggest a surefire way to kickstart the process?

Be patient. With so much stuff out there, I can see how many, especially those just starting out, are keen to get moving quickly.

People can let their impatience with a lack of progress get the better of them, and they try to dive straight into something.

However, they quickly get stuck and then maybe start to lose heart. So, the key is to be more diligent in your learning and consider being more pragmatic in your approach to improve your understanding.

Can you recommend a good launch point for people at the beginning of their CG journey?

There are so many resources available to people, it can be hard to know where to start. I would always start with the software vendors themselves. It’s easy to overlook the official resources, documentation and tutorials they provide.

Granted, there’s a mixture of quality, but it’s a good place to start before moving on. Where a vendor can’t provide materials themselves, they will often lean on their user communities.

More recently I’ve seen vendors sponsor known users to create tutorials on certain topics. There’s some very good stuff out there, so it’s worth checking out.

Some of these resources can be found below. They make a fantastic start to the journey of complementing your existing skill set, allowing you to increase your knowledge base, whether to enhance your existing workflow or to transition to a new one.

Resources to help you learn:

  • Very good tutorials from Henning Sanden and Morten Jaeger, two senior character artists who have worked in VFX on some big shows
    • The main site has paid content, but their YouTube page has some good free content
    • They cover a variety of software, from Maya to Blender and Mari, etc
  • Mathias Røyrvik – a rigging TD who has worked in VFX. His YouTube channel has some great Maya/rigging tutorials and deeper content around maths and Maya/Bifrost
  • Phil Radford – excellent Maya tutorials and has started doing more content on Maya/Bifrost on his YouTube site 
  • Ryan Manning – an experienced user with some excellent Unreal tutorials on his channel
  • An online school specifically for character rigging and technical art. Paid and free courses.

Get in touch

Call us on +353 (0)1 526 6717 or email sales@bluegfx.eu

Autodesk Maya Preferences

Exploring the best settings to reduce Maya crashes and improve your workflow.
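
As a taster, here is a minimal sketch to run in Maya’s Script Editor showing the kind of settings covered; the interval, limits, and the incremental-save optionVar name are assumptions to check against your Maya version.

```python
# A minimal sketch of defensive Maya settings; values are illustrative.
import maya.cmds as cmds

# Autosave every 10 minutes, keeping at most 5 backups on disk.
cmds.autoSave(enable=True, interval=600, limitBackups=True, maxBackups=5)

# Incremental save so each manual save preserves the previous version.
# (optionVar name is an assumption; verify for your Maya version.)
cmds.optionVar(intValue=("isIncrementalSaveEnabled", 1))

# A longer, bounded undo queue makes experiments easier to walk back.
cmds.undoInfo(state=True, infinity=False, length=200)
```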

Let's Talk

Got a project in mind?

Call us for a chat today on +353 (0)1 526 6717 or email sales@bluegfx.eu to find out how we can help!

Contact our Support Team

support@bluegfx.com
