
A Complete Learning Path for Game Development
Source: 微浪科技
Author: 若朝若曦
Link: www.cnblogs.com/majianchao/p/6523455.html
In software development, game development looks like a direction with a very clear goal, but it is actually a very broad field. Without a bit of guidance at the start it is easy to go astray; conversely, a few pointers from those who have walked the road before can make your effort twice as productive.
1. Choosing a Platform and Programming Language
First, there are many kinds of game development platforms:
Desktop platforms: Windows, Linux, Mac OS;
Mobile platforms: iOS, Android, Windows Phone, BlackBerry OS, Symbian;
Dedicated consoles: Xbox, PlayStation, Wii, and so on.
If you want to develop for mobile, Android's mainstream language is Java and iOS's is Objective-C, so you should learn the mainstream language of your chosen platform; on desktop, the mainstream game development language is C++, and it will remain so for a long time. That does not mean development on different platforms has nothing in common, or that what you learn on one platform is useless on another. The further you go in programming, the more you find that the essence of many things is similar: if you have a solid grasp of C++ fundamentals (rather than rote memorization), moving to Java later is not hard, because the object-oriented nature and many core language features are shared.
So if you want to go far, do not stay on the surface while learning. The deeper you dig into the foundations of one topic, the more clearly you will see how other topics share the same ideas, and the easier they become. That said, at the beginning it is best to stay close to your own specialty and its language, so you can establish yourself in that area as quickly as possible. Whichever platform you choose, a project as large as a game cannot do without a game engine, so there are two routes to pick from:
Only use a game engine;
Use a game engine and also study how game engines work internally.
Route One: Only Use a Game Engine
On the first route, game engines are not hard to use, so it suits people who want to start doing game development work quickly. Most of the time we do not write our own engine at work either, so this is adequate for ordinary day-to-day development. For someone who only uses an engine, getting started means working through that engine's tutorials (books, videos, blogs, online courses, official documentation, and so on) and becoming fluent in the programming language those tutorials use. So the first thing to do is survey today's popular engines. Because performance and supported languages differ across platforms, the popular engines differ per platform as well (the ones in brackets are particularly recommended, based on a combination of openness, ease of use, performance, and visual quality):
Windows or consoles: [Unreal], Frostbite, CryEngine 3
iOS: [Unity3D], [Cocos2d-ObjC], [Unreal], Sparrow, SpriteKit.
Android: [Unity3D], [Cocos2d-x], [Unreal], AndEngine, libGDX.
Web: [Egret], [Cocos2d-html5], Fancy3D, Unity3D.
When choosing an engine, prefer a popular one: popularity usually means the official feature set is relatively complete and easy to use, and the community is more active at answering questions. Each engine has its strengths. Unity3D wins on simplicity and low hardware requirements, but its visuals are not especially impressive, so it is mostly used on mobile rather than on PC or consoles. Unreal wins on free access to its source code and excellent visuals, but it demands more from the hardware, so it is often used for large, visually rich games. Cocos2d-x is also open source and free and is very capable in 2D, but it lacks visual editing. If you want to understand engine internals, an open-source engine should be your first choice. Different engines use different programming or scripting languages, so when you pick an engine, also note which languages it uses. In most cases Windows development uses C++ combined with a scripting language such as Lua or Python, iOS uses Objective-C or Swift, and Android uses Java. This is not a hard rule, though: Unity, for example, can be scripted with C# or JavaScript on every platform. Below is a more detailed comparison of the commercially popular engines:
1) Unreal Engine 4
Supported platforms: Microsoft Windows, Mac OS X, iOS, Android, VR devices (including but not limited to SteamVR/HTC Vive, Morpheus, Oculus Rift, and Gear VR), Linux, SteamOS, and HTML5.
Programming language: C++, or no coding at all (visual development).
Pros: source code freely available, excellent visuals, proven in several large commercial titles; cross-platform, a rich marketplace of assets, and powerful visual development that lets you build games without writing code.
Cons: relatively high hardware requirements for development machines.
2) Unity3D
Supported platforms: iOS, Android, Windows Phone 8, Tizen, Microsoft Windows, Windows Store apps, Mac, Linux/Steam OS, Web Player, WebGL, PlayStation 3, PlayStation 4, PlayStation Vita, Xbox One, Xbox 360, Wii U, Android TV, Samsung SMART TV, Oculus Rift, Gear VR, Microsoft HoloLens, PlayStation VR.
Programming languages: C#, JavaScript, Boo.
Pros: convenient and easy to use, plenty of Chinese-language material, cross-platform, a rich asset store, visual development.
Cons: average visuals, not free and open source, relatively low engine efficiency.
3) The Cocos2d family (including Cocos2d-x, Cocos2d-ObjC, Cocos2d-html5, Cocos2d-xna, and others)
Supported platforms: Microsoft Windows, OS X, Linux, iOS, Android, Tizen, HTML5 browsers, Windows Phone 7 & 8, Xbox 360.
Programming languages: Python, Objective-C, C++, Lua, JavaScript, Swift, C#.
Pros: free and open source, cross-platform, plenty of Chinese-language material, rich and mature 2D technology.
Cons: no visual editing; 3D support is immature.
4) CryEngine 3
Supported platforms: Microsoft Windows, OS X, Linux, PlayStation 3, PlayStation 4, Wii U, Xbox 360, Xbox One, iOS, Android.
Programming languages: C++, Lua.
Pros: world-class visuals, visual development, cross-platform.
Cons: high hardware requirements for development machines; not free and open source.
5) Frostbite 3
Supported platforms: Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One.
Programming language: C++.
Pros: excellent visuals, visual development, cross-platform.
Cons: high hardware requirements for development machines; not free and open source.
6) Egret
Supported platforms: HTML5, iOS, Android, Windows Phone.
Programming languages: TypeScript, JavaScript.
Pros: free and open source, a rich set of companion tools, fully Chinese documentation, visual development, cross-platform.
Cons: supports relatively few platforms; mostly used for small games with modest performance requirements.
But using an engine alone keeps you at the surface, away from the essentials. When a problem or an unusual requirement comes up during development, you will often struggle to handle it. Engine technology also changes fast, and what you learn is mostly methods handed down by others rather than the underlying ideas; if a different engine becomes popular later, you have to learn it all over again. Many large companies even have their own in-house engines (NetEase's 风魂 engine, Kingsoft's 剑网3 engine, Snail Games' Flexi engine, and so on). So taking this route alone may not carry you very far, and it may grow dull over time.
Route Two: Use a Game Engine and Understand How Engines Work
On the second route, studying engine internals will not make your day-to-day work dramatically better in the short term, but it compensates for the problems of only using an engine. Most of the time the point of learning engine internals is not to build your own engine someday, but to understand the engine you use more deeply and use it better. Of course, a journey of a thousand miles begins with a single step: with enough accumulation, someone who keeps studying engine internals can eventually build an engine of their own. If you want to learn how engines work, set clear long-term goals for yourself:
1) At the start you may have little idea of what an engine even is, so begin by using one or two engines (for C++ programmers I recommend Unreal Engine 4; its source is now freely available and well worth studying. Try to avoid engines whose source is closed, since they contribute little to learning internals).
2) Game development is an art of real-time rendering, so you cannot avoid learning a graphics API. The mainstream choices today are DirectX 11 (Windows only) and OpenGL (cross-platform); you need to get started with at least one of them.
3) After that you can start reading computer graphics textbooks. If you find your theory is lacking, this is the time to brush up on linear algebra, basic calculus, and 3D math.
4) Rendering is only one part of a game engine. There are also shaders, terrain, the physics engine, models and animation, AI agents, network programming, and more. Once you have covered the basics of rendering, pick the part that interests you and specialize in it; real projects are built by teams rather than lone developers, and everyone should be an expert in their own area. To keep this article compact, the foundational theory and the recommended books for each engine component are collected in the appendix below. The book recommendations are adapted from Clayman's Graphics Corner.
Link: http://www.cnblogs.com/clayman/archive//1459001.html
Below are the recommended books on foundational theory and the individual engine components (no need to read them all; pick selectively):
1) Mathematical foundations
The following math books are tailored to game development and are a more efficient path than studying any single branch of mathematics in isolation:
《3D Math Primer for Graphics and Game Development》 (Chinese translation available)
《Mathematics for 3D Game Programming and Computer Graphics》
《Essential Mathematics Guide》
《Geometric Tools for Computer Graphics》
2) Graphics APIs
Books on DirectX:
《Introduction to 3D Game Programming with DirectX 11》, the famous "Dragon Book"; a must-read for beginners
《Practical Rendering and Computation with Direct3D 11》, more advanced
《Real-Time 3D Rendering with DirectX and HLSL》
Books on OpenGL:
《OpenGL SuperBible》, the well-known "Blue Book", example-driven; a Chinese translation exists but is poorly done, so the original is recommended.
《OpenGL Programming Guide》, the well-known "Red Book"; Chinese translation available. It reads more like a reference manual covering the whole API, with few examples, and suits experienced users looking things up.
《OpenGL 4.0 Shading Language Cookbook》, advanced; essentially an API handbook. A Chinese translation exists but is poorly done, so the original is recommended.
3) Computer graphics
《The Nature of Code》 (Chinese translation available), fairly easy
《Fundamentals of Computer Graphics》, an introductory textbook used by many universities abroad; broad coverage, from the underlying math to modeling, rendering, animation, and applications
《Physically Based Rendering》, mainly about offline rendering
《Real-Time Rendering》, an essential classic; a must-read!
《计算机图形学》 (Computer Graphics), by Shirley
《Computer Graphics》, be sure to read the latest (third) edition
《Computer Graphics: Principles and Practice in C》 (Chinese edition: 《计算机图形学原理及实践:C语言描述》)
4) Shaders
《The Cg Tutorial》, introductory
《The Complete Effect and HLSL Guide》, introductory
《Shaders for Game Programmers and Artists》, full of introductory examples
《Advanced Lighting and Materials with Shaders》, covers lighting models and techniques
《GPU Gems》 series, required reading for advanced study.
《ShaderX》 series, one volume per year covering the latest real-time rendering techniques; mostly paper-like articles and on the difficult side.
《Programming Vertex, Geometry, and Pixel Shaders》, mostly DirectX 10, very detailed.
《Real-Time 3D Terrain Engines Using C++ and DX9》, a very thorough discussion of terrain rendering techniques.
6) Model import and animation
《Character Animation With Direct3D》, covers modern game animation techniques
《Computer Animation》
《Real-Time Cameras》
《Computer Facial Animation》
《Realtime 3D Character Animation with Visual C++》
《Advanced Animation and Rendering Techniques》
《Cloth Modeling and Animation》
7) Network programming
《TCP/IP Illustrated, Volume 2》 (Chinese edition: 《TCP/IP详解 卷2》)
《Network Programming for Microsoft Windows》
《Advanced Programming in the UNIX Environment》
《Windows via C/C++》 (Chinese edition: 《Windows核心编程》)
《Multithreading Applications in Win32》
《网络游戏核心技术与实战》 (a Chinese book on core technology and practice of online games)
8) Physics
《Game Physics》
《Game Physics Engine Development》
《Real-Time Collision Detection》, the best book on collision detection
《3D Game Engine Design, 2nd Edition》, chapters 8 and 9 are worth reading
9) Level of detail
《Level of Detail for 3D Graphics》
10) Ray tracing
《Physically Based Rendering: From Theory to Implementation》
《Another Introduction to Ray Tracing》
11) Artificial intelligence
《Programming Game AI by Example》 (Chinese translation available)
《Artificial Intelligence for Games》
《AI Programming Wisdom》
《AI Game Engine Programming》
《Game Programming Gems》 series, covering a broad range of topics; read selectively
《Color and Light in Nature》
《Digital Design of Nature》
《Form+Code in Design, Art, and Architecture》
(The detailed learning-roadmap image that originally accompanied this article is by 星铃丹 and was shared with permission; please credit the source when reposting.)
Unreal Engine 4.14 Release Notes
This release includes hundreds of updates from Epic and 71 improvements submitted by the incredible community of Unreal Engine developers on GitHub! Thanks to each of these contributors to Unreal Engine 4.14:
Adam Moss (adamnv), Alan Edwardes (alanedwardes), Andreas Axelsson (judgeaxl), Andreas Schultes (andreasschultes), Andrew Armbruster (aarmbruster), Artem V. Navrotskiy (bozaro), Audiokinetic Inc. (audiokinetic), BaxterJF, CA-ADuran, Cameron Angus (kamrann), Cengiz Terzibas (yaakuro), Christian Hutsch (UnrealEverything), CodeSpartan, Cuo Xia (shrimpy56), Damir Halilovic (DamirHalilovic), dcyoung, Deniz Piri (DenizPiri), Dennis Wissel (dsine-de), Dominic Guana (jobs-git), Dorgon Chang (dorgonman), dsine-de, Filip Brcic (brcha), Hakki Ozturk (ozturkhakki), Hannah Gamiel (hgamiel), Hao Wang (haowang1013), Jarl Gullberg (Nihlus), Jason (Abatron), Jeff Rous (JeffRous), Jeremy Yeung (jeremyyeung), Jørgen P. Tjernø (jorgenpt), Josh Kay (joshkay), jpl-mac, KashiKyrios, Kory Postma (korypostma), Kyle Langley (Vawx), Laurie (Laurie-Hedge), Lei Lei (adcentury), Leszek Godlewski (inequation), Marat Radchenko (slonopotamus), Matthew Davey (reapazor), Matthias Huerbe (MatzeOGH), Matthijs Lavrijsen (Mattiwatti), mbGIT, Michael Geary (geary), Michail Nikolaev (michail-nikolaev), Moritz Wundke (moritz-wundke), Narendra Umate (ardneran), Nelson Rodrigues (NelsonBilber), null7238, Paul Evans (paulevans), PjotrSvetachov, projectgheist, Rama (EverNewJoy), rcywongaa, rekko, Ryan C. Gordon (rcgordon), sangpan, Sébastien Rombauts (SRombauts), Shihai (geediiiiky), stfx, straymist, Theodoros Ntakouris (Zarkopafilis), tmiv, ungalyant, Webster Sheets (Web-eWorks), x414e54, yehaike, YossiMHWF, Yukariin, Zachary Burke (error454), Zhiguang Wang (zhiguangwang)
What's New
Unreal Engine 4.14 introduces a new forward shading renderer optimized for VR, enabling crisp multi-sampled anti-aliasing in your games. The new Contact Shadows feature renders beautifully detailed shadows for intricate objects. We've also introduced a new automatic LOD generation feature for static meshes that does not require a third-party library.
We've streamlined the animation tools to help you be more productive, and added many new features to Sequencer (UE4's non-linear cinematic tool), as well as improvements to vehicles, clothing and animation Blueprints.
For mobile developers, Vulkan support is ready to use on compatible Android devices! And, we've added various new mobile rendering features such as reading from scene color and depth, and the ability to draw 3D objects on top of your UI.
On the Windows platform, C++ programmers can now use Visual Studio "15" for development. Visual Studio 2015 is still supported.
Major Features
New: Forward Shading Renderer with MSAA
The new forward shading renderer combines high-quality UE4 lighting features with Multisample Anti-Aliasing (MSAA) support! MSAA and the option to enable per-material optimizations make the forward renderer well suited for VR.
The forward renderer works by culling lights and reflection captures to a frustum-space grid. Each pixel in the forward pass then iterates over the lights and reflection captures affecting it, shading the material with them. Dynamic shadows for stationary lights are computed beforehand and packed into channels of a screen-space shadow mask allowing multiple shadowing features to be used efficiently. Enable "Forward Shading" in the Rendering Project settings and restart the editor to use the forward renderer.
Supported forward rendering features include:
Full support for stationary lights, including dynamic shadows from movable objects which blend together with precomputed environment shadows
Multiple reflection captures blended together with parallax correction
Planar reflections of a partial scene, composited into reflection captures
D-Buffer decals
Precomputed lighting and skylights
Unshadowed movable lights
Capsule shadows
Instanced stereo compatible
Some features are not yet supported with Forward Shading:
Screen space techniques (SSR, SSAO, Contact Shadows)
Shadow casting Movable Lights
Dynamically shadowed translucency
Translucency receiving environment shadows from a stationary light
Light functions and IES profiles
Alpha to Coverage
MSAA on D-Buffer decals, motion blur, dynamic shadows and capsule shadows
The forward renderer supports both multi sample anti-aliasing (MSAA) and temporal anti-aliasing (TAA). In most cases TAA is preferable because it removes both geometric aliasing and specular aliasing. In VR, the constant sub-pixel movement introduced by head tracking introduces unwanted blurriness, making MSAA a better choice.
Projects that choose to use MSAA will want to build content to mitigate specular aliasing. The "Normal to Roughness" feature can help reduce specular aliasing from detailed normal maps. Automatic LOD generation for static meshes can flatten features on distant meshes and help reduce aliasing from small triangles.
In our tests, using MSAA instead of TAA increases GPU frame time by about 25%. Actual cost will depend on your content.
To use MSAA, set the default Anti-Aliasing Method in the Rendering project settings:
The console variable "r.MSAACount" controls how many MSAA samples are computed for every pixel. "r.MSAACount 1" has special meaning and falls back to Temporal AA, which allows for convenient toggling between anti-aliasing methods.
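As a minimal illustration of toggling this from game code at runtime, the console variable can be driven through IConsoleManager. This sketch assumes a project that already has forward shading enabled; the helper function itself is hypothetical:

#include "HAL/IConsoleManager.h"

// Hypothetical helper: switch between MSAA sample counts and Temporal AA at runtime.
static void SetForwardAASamples(int32 SampleCount)
{
    // 1 falls back to Temporal AA; 2, 4 or 8 select MSAA sample counts.
    if (IConsoleVariable* MSAACountCVar =
        IConsoleManager::Get().FindConsoleVariable(TEXT("r.MSAACount")))
    {
        MSAACountCVar->Set(SampleCount, ECVF_SetByGameSetting);
    }
}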
Performance
The forward renderer can be faster than the deferred renderer for some content. Most of the performance improvement comes from features that can be disabled per material. By default, only the nearest reflection capture will be applied without parallax correction unless the material opts-in to High Quality Reflections, height fog is computed per-vertex, and planar reflections are only applied to materials that enable it.
Leveraging these options in Epic's new VR game, Robo Recall, the forward renderer is about 22% faster than the deferred renderer on an NVIDIA 970 GTX.
New: Contact Shadows
Contact shadows allow for highly detailed dynamic shadows on objects.
The ivy below is only a few flat cards but is able to self-shadow in a very convincing way due to outputting Pixel Depth Offset in the material.
The Contact Shadows feature adds a short ray cast in screen space against the depth buffer to know whether a pixel is occluded from a given light or not. This helps provide sharp detailed shadows at the contact point of geometry. There are a number of reasons why shadows through other algorithms may have missing or blurry contacts. Typically it is due to lack of resolution or a depth bias. Regardless of the reason, the new Contact Shadows feature can fill in the gap very well for a small cost.
Contact shadows can be used by setting the Contact Shadow Length property on your light. This controls the length of the ray cast in screen space where 1 is all the way across the screen. Large values can degrade quality and performance so try and keep the length to the minimum that achieves your desired look.
Another use case of contact shadows is to get self-shadowing from the parallax occlusion mapping from arbitrary lights. This requires outputting pixel depth offset in the material. This animation shows a parallax occlusion mapped surface with contact shadow length set to 0.1.
New: Automatic LOD Generation
Unreal Engine now automatically reduces the polygon count of your static meshes to create LODs!
The above animation shows five LODs that were generated automatically. Each is half the number of triangles as the previous.
Automatic LOD generation uses what is called quadric mesh simplification. The mesh simplifier will calculate the amount of visual difference that collapsing an edge by merging two vertices would generate. It then picks the edge with the least amount of visual impact and collapses it. When it does, it picks the best place to put the newly merged vertex and removes any triangles which have also collapsed along with the edge. It will continue to collapse edges like this until it reaches the requested target number of triangles.
This mesh simplifier maintains UVs including generated lightmap UVs, normals, tangents, and vertex colors. Because UVs are maintained the same materials can be used as well as all LODs can share the same lightmap.
The high level settings for controlling the generated LODs are in the static mesh viewer under LOD Settings.
"LOD Group" provides a list of presets. These can be changed per project in BaseEngine.ini under [StaticMeshLODSettings]. We encourage you to set up good categories for your project and mostly use LOD groups instead of controlling the details of every LOD.
An important setting to note is "Auto Compute LOD Distances". Because the algorithm knows how much visual difference every edge collapse is adding, it can use this information to determine at what distance that amount of error is acceptable. That means it will automatically calculate the screen size to use for each LOD as well.
If you wish to muck with the details of auto generation for each LOD they can be found under Reduction Settings. Note that this feature currently only works with static meshes and that mesh proxy LOD generation is not yet supported.
New: Precomputed Lighting Scenarios
We now support precomputing lighting for multiple lighting setups with the same geometry! This is especially important for use cases such as VR and architectural visualization where you need the highest possible quality at the fastest possible performance.
In the above example the directional light, sky light and skybox have been placed in a Lighting Scenario level called DayScenario. The streetlights have been placed in NightScenario.
To use Lighting Scenarios:
Right click on a sublevel in the Levels window and change it to Lighting Scenario. When a Lighting Scenario level is made visible, its lightmaps will be applied to the world.
Change the level streaming method to Blueprint on the Lighting Scenario level
Place meshes and lights into this level and build lighting
In the BeginPlay of your persistent level's Level Blueprint, execute a Load Stream Level on the Lighting Scenario level that you want active (a C++ sketch of the equivalent call follows below).
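For reference, a minimal C++ sketch of that last step, assuming a Lighting Scenario sublevel named DayScenario (the level name and the free function are illustrative, not engine API):

#include "Kismet/GameplayStatics.h"

// Stream in a Lighting Scenario sublevel; its lightmaps are applied once it becomes visible.
static void ActivateLightingScenario(UObject* WorldContextObject)
{
    FLatentActionInfo LatentInfo;
    LatentInfo.CallbackTarget = WorldContextObject; // no completion callback used in this sketch

    UGameplayStatics::LoadStreamLevel(
        WorldContextObject,
        TEXT("DayScenario"),
        /*bMakeVisibleAfterLoad=*/ true,
        /*bShouldBlockOnLoad=*/ false,
        LatentInfo);
}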
Limitations:
Only one Lighting Scenario level should be visible at a time in game.
When a Lighting Scenario level is present, lightmap data from all sublevels will be placed inside it so that only the DayScenario lightmaps are loaded when it's daytime. As a result, lightmaps will no longer be streamed by sublevel.
A Reflection Capture update is forced when making a Lighting Scenario level visible, which can increase load time.
New: Improved Per-Pixel Translucent Lighting
In the deferred renderer, the new forward shading functionality can now be used on translucent surfaces to get specular highlights from multiple lights and image-based reflections from parallax-corrected reflection captures!
New: Full Resolution Skin Shading
UE4 now supports full resolution skin shading for the Subsurface Profile shading model. This provides high-fidelity lighting for surface details such as pores and wrinkles.
Checkerboard rendered skin (left), Full resolution skin (right) (Note: 3D head model by Lee Perry-Smith)
Surface detail - checkerboard (left), full resolution (right)
Previously, lighting on skin was represented using a checkerboard pattern, where half the pixels contained diffuse lighting and the other half, specular lighting. The lighting was recombined during a final subsurface profile fullscreen pass. That approach gave good results for subsurface lighting (which is low-frequency by nature), but it could result in lower fidelity lighting for surface details.
With the new approach, every pixel contains diffuse and specular lighting information, packed into an RGBA encoding. This allows us to reconstruct full-resolution lighting during the final subsurface profile fullscreen pass, giving better results for surface details and more stable behavior with temporal antialiasing.
Compatibility
Full resolution skin shading requires at least a 64-bit scene color format with a full alpha channel. The default FloatRGBA scene color format works fine, but 32-bit representations such as FloatRGB are not supported. If the scene color format is not compatible with full resolution skin, we fall back to checkerboard-based lighting. This behaviour can be overridden using the r.SSS.Checkerboard console variable. The possible values for this are:
0: Checkerboard disabled (full resolution)
1: Checkerboard enabled (old behavior)
2: Automatic (default): full resolution lighting will be used if the scene color pixel format supports it
Limitations
It's worth noting that the full-resolution skin shading is an approximation. It works well in the vast majority of cases, but certain material features can be problematic due to the encoding method. In particular:
Metallic materials
Emissive materials
These features will work, but you may notice differences in output compared to checkerboard due to the packed RGBA diffuse/specular encoding. It is possible to work around particular issues when authoring materials by setting the opacity to 0 in areas where skin shading is not desirable. Pixels with an opacity of zero are treated as default lit for the purposes of shading.
Note: Masking non-opaque pixels in this way is also worthwhile for performance reasons, since these pixels are bypassed by the subsurface postprocess.
Performance Considerations
If your title has a 64-bit scene color format then full resolution subsurface lighting will typically be faster than checkerboard due to the reduced number of texture fetches. However, if your title has a 32-bit scene color then the bandwidth savings of checkerboard will likely outweigh the benefits of full resolution (although this is hardware dependent).
New: Reflection Capture Quality Improvements
When you use Reflection Captures, the engine mixes the indirect specular from the Reflection Capture with indirect diffuse from lightmaps. This helps to reduce leaking, since the reflection cubemap was only captured at one point in space, but the lightmaps were computed on all the receiver surfaces and contain local shadowing.
(With lightmap mixing on the left, without on the right)
Mixing works well for rough surfaces, but for smooth surfaces the reflections from Reflection Captures no longer match reflections from other methods, like Screen Space Reflections or Planar Reflections.
Lightmap mixing is no longer done on very smooth surfaces.
A surface with roughness .3 will get full lightmap mixing, fading out to no lightmap mixing by Roughness .1 and below. This allows Reflection Captures and SSR to match much better and it's harder to spot transitions.
The below shot shows mirror surface reflections before and after. Note the difference in the reflection of the wall between SSR and reflection captures. The artifact is especially noticeable in motion, because it will move with your camera due to SSR limitations.
This affects existing content - in cases where you had reflection leaking on smooth surfaces, that leaking will be much more apparent. To solve this, place additional reflection probes to reduce the leaking. Levels should have one large spherical capture at a minimum. You can also revert to the old lightmap mixing behavior with a rendering project setting:
New: Visual Studio &15& Support
Unreal Engine 4.14 now supports the upcoming Visual Studio "15" out of the box. Visual Studio 2015 is still supported as well. Visual Studio "15" is currently available in "Preview".
If you have multiple versions of Visual Studio installed, you can select which to use through the "Source Code" section in "Editor Preferences".
New: Create Static Mesh from Actors
You can now right-click actor(s) in the level viewport and convert their current state to a new Static Mesh asset. This even works with skeletal meshes, so you can capture a mesh from posed characters.
New: NVIDIA Ansel Support
UE4 4.14 adds support for NVIDIA Ansel Photography! Ansel is a new tool from NVIDIA that enables players to take in-game screenshots. While in Ansel mode the game will pause and players will have camera control to compose shots and apply various screen effects. It can also capture a variety of screenshots, from HDR to 360 stereo.
Ansel support is now exposed as a new UE4 plugin. After enabling the plugin in your project, you can access Ansel in a standalone game session.
(Viewing an Ansel 360 capture in a web browser)
We have also exposed functions on the Player Camera Manager class so your games can customize Ansel capture behavior. Games may wish to limit the distance of camera movement, disable UI elements, disable/enable certain lighting or post processing effects, etc. Thanks to Adam Moss and NVIDIA for providing the implementation. To get started using this feature, check out the "Ansel_integration_guide.html" document under the Ansel plugin folder. Official UE4 documentation for Ansel will be coming soon.
New: Improved Cable Component
The Cable Component plugin has been updated with new features, including collision support and sockets for attaching objects or effects.
Cable Component now includes these new features:
Simple collision, including friction settings
Stiffness setting, which tries to reduce bending
Sockets at each end of the cable
Ability to set either end to "free"
New: UI Font Outlines
Fonts for UMG and Slate now have an optional outline that can be applied to them.
Any widget that specifies a font can change the outline setting, color, or material to be used with the outline.
A font material on an outline can be used in the same way that any other font material is used except that a material specified for an outline only applies to the outline. Font materials can be used on the outline to create lots of different effects.
New: Editable Map and Set Properties
We now support editing Map and Set properties from within the Details Panel!
Sets are similar to Arrays, but you can never have the same element in a set twice and the order of elements is not guaranteed. However, it's extremely quick to look up whether a set contains an element.
Maps have a key and a value, and you can edit both within the details panel. Like Sets, all keys must be unique, and the order of elements is not guaranteed to persist. However, it's very quick to look up an element's value as long as you know its key.
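As a minimal sketch of what this looks like on the C++ side (the class and property names are hypothetical), Map and Set properties declared like this now show up as editable containers in the Details panel:

#include "GameFramework/Actor.h"
#include "InventoryActor.generated.h"

UCLASS()
class AInventoryActor : public AActor
{
    GENERATED_BODY()

public:
    // Editable in the Details panel; keys must stay unique and ordering is not guaranteed.
    UPROPERTY(EditAnywhere, Category = "Inventory")
    TMap<FName, int32> ItemCounts;

    // Editable set of unique entries; fast membership lookups at runtime.
    UPROPERTY(EditAnywhere, Category = "Inventory")
    TSet<FName> UnlockedRecipes;
};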
New: Vector Noise in Materials
The Noise material graph node includes several functions useful for procedural shading that produce a single-valued (scalar) result.
(Image: example outputs for Cellnoise, Vector Noise, Gradient, Curl, and Voronoi.)
The new Vector Noise node adds several more with 3D or 4D vector results. Due to the run-time expense of these functions, it is recommended that once a look is developed with them, all or part of the computation be baked into a texture using the Draw Material to Render Target Blueprint feature introduced in 4.13. These material graph nodes allow procedural looks to be developed in engine on final assets, providing an alternative to creating procedurally generated textures with an external tool to apply to assets in the engine. The new functions are:
1. Cellnoise: Returns a random color for each cell in a 3D grid (i.e. from the mathematical floor operation applied to the node input). The results are always consistent for a given position, so can provide a reliable way to add randomness to a material. This Vector Noise function is extremely cheap to compute, so it is not necessary to bake it into a texture for performance.
2. Perlin 3D Noise: Computes a version of Perlin Simplex Noise with 3D vector output. Each output component is in the range -1 to 1. Computing three channels of noise output at once is cheaper than merging the results from three scalar noise functions.
3. Perlin Gradient: Computes the analytical 3D gradient of a scalar Perlin Simplex Noise. The output is four channels, where the first three (RGB) are the gradient, and the fourth (A) is the scalar noise. This is useful for bumps and for flow maps on a surface
4. Perlin Curl: Computes the analytical 3D curl of a vector Perlin Simplex Noise (aka Curl Noise). The output is a 3D signed curl vector. This is useful for fluid or particle flow.
5. Voronoi: Computes the same Voronoi noise as the scalar Noise material node. The scalar Voronoi noise scatters seed points in 3D space and returns the distance to the closest one. The Vector Noise version returns the location of the closest seed point in RGB, and the distance to it in A. Especially coupled with Cellnoise, this can allow some randomized behavior per Voronoi cell. Below is a simple stone bed material using the distance component of the Vector Noise / Voronoi to modulate some surface bumps and blend in moss in the cracks, and the seed position together with Vector Noise / Cellnoise to change the color and bump height per rock.
Perlin Curl and Perlin Gradient can be added together in octaves, just as regular Perlin noise can. For more complex expressions, it is necessary to compute the gradient of the result of the expression. To help with this, place the expression to compute into a material function and use it with the helper nodes Prepare3DDeriv, Compute3DDeriv, and either GradFrom3DDeriv or CurlFrom3DDeriv. These use four evaluations of the base expression spaced in a tetrahedral pattern to approximate these derivative-based operations. For example, this network uses the gradient to compute bump normals from a bump height function.
New: PhysX 3.4 Upgrade
Unreal Engine now uses the latest version of NVIDIA PhysX, which is 3.4. This brings improved performance and memory usage for rigid bodies and scene queries (especially multi-core performance.)
This version of PhysX allows for Continuous Collision Detection (CCD) on kinematic objects, which allows for accurate collisions between very fast moving rigid bodies! In the animation below from a Robo Recall test level, a player is swiping a weapon to impact an oncoming bullet!
New features available to use in UE4 right away:
Continuous Collision Detection (CCD) support for kinematic objects (shown in the animation above!)
Faster updating of kinematic objects
Faster convex hull cooking
In future releases, we'll expose more new physics features available in the latest version of PhysX.
New: Animation Editor Revamp
Animation-related tools have been split into separate asset editors rather than using one editor with multiple modes.
Many other improvements have been made as well. Functionality that is common to each of the editors is now generally found in the viewport and the improved Skeleton Tree.
The Skeletal Mesh editor has had modifications to its layout and to the asset details panel, specifically the materials and LOD sections have been overhauled.
The Skeleton editor has had its layout tweaked and the skeleton tree itself has been polished.
The Animation editor has had its layout tweaked and the asset browser has gained the ability to optionally add and remove its columns.
The Animation Blueprint editor has had its layout tweaked to more closely follow that of the standard Blueprint editor. The Anim Preview Editor can now optionally apply changes that are made to the preview's properties to the class defaults.
Asset Shortcut Bar
You can jump between related animation assets that share a skeleton using the improved Asset Shortcut Bar.
Recording Moved to Transport Controls
Recording used to be performed via a button in the toolbar. Now it has been moved to a recording button in the transport controls, similar to Sequencer.
Preview Scene Setup
The objects in the scene and their animation can be modified in each of the editors via the "Scene Setup" menu. This allows preview animations to be applied, different preview meshes to be set (this is either specified for the skeleton or for individual animations) and additional meshes to be attached. Additional meshes are now specified as separate editor-only assets that define a set of skeletal meshes that are driven as slaves of the main mesh.
New: Animation Curve Window
You can now easily tweak Animation Curves using the new dedicated window for this in the Animation Editor. Curves are previewed live as you edit them.
Previously you could only configure curves on the animation assets themselves, but now you'll set these for the skeleton instead.
New: Child Actor Templates
Child Actor Components added to a Blueprint can have their properties customized via Child Actor Templates.
Once you add a Child Actor Component, you will see an expandable template in the Details panel of the owning Actor's Blueprint Editor. From here, you can access all the properties of the Child Actor, including public variables. For example, if you have Blueprint_A containing a PointLight Component with a public variable driving its color, and then make that Blueprint a Child Actor Component within Blueprint_B, you can now adjust that color variable from within Blueprint_B's Details panel!
This is a dramatic improvement over previous behavior, wherein users were restricted to the default properties of the Child Actor Component and could only make updates via gameplay script.
New: Default Animation Blueprint
Allows you to assign an animation Blueprint to a skeletal mesh that will always be run after any animation Blueprint assigned in the component. This allows you to set up anim dynamics or other controllers that will always be applied, whether that mesh is viewed in the animation tools, a Sequencer cinematic or just placed in a level.
This allows for dynamics, controllers, IK or any other anim Blueprint feature to be related to a mesh and not have to be duplicated in every animation Blueprint intended to be used on that mesh.
"Post process" animation Blueprints also have their own native and Blueprint update step so parameters can be read or calculated for use in the animation graph.
New: Landscape Editing in VR
You can now create and sculpt terrain and paint landscape materials using motion controllers in VR!
You can summon the Landscape Editing tools from the "Modes" panel on your Quick Menu. Then choose a brush from the UI and start painting! If you hold the "Modifier" button on the motion controller, you can erase instead of painting.
New: Improved Support for Vehicles
We've changed where tire forces are applied. Previously, tire forces were applied at the vehicle's center of mass. We now apply force at the tire's center of mass, which makes it easier to achieve load sway in cars.
We've also added the Simple Wheeled Vehicle Movement Component, which provides wheel suspension and tire friction without the complexities of engine and drivetrain simulation. This component allows you to easily apply torque to individual tires (a hedged code sketch follows below). All components inheriting from Wheeled Vehicle Movement Component can now be used on arbitrary components, and you no longer have to rely on the Wheeled Vehicle actor.
Existing content will automatically have Deprecated Spring Offset Mode set to true which will maintain the old behavior. You can tune this behavior further by changing Suspension Force Offset.
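A hedged sketch of driving such a component per wheel follows; the SetDriveTorque/SetBrakeTorque calls are assumptions based on the Blueprint-facing description above, so check the component's header for the exact signatures:

// Assumed API: per-wheel torque control on USimpleWheeledVehicleMovementComponent.
static void ApplyThrottleToAllWheels(USimpleWheeledVehicleMovementComponent* Movement, float DriveTorque)
{
    if (Movement == nullptr)
    {
        return;
    }
    for (int32 WheelIndex = 0; WheelIndex < Movement->Wheels.Num(); ++WheelIndex)
    {
        Movement->SetDriveTorque(DriveTorque, WheelIndex); // assumed signature
        Movement->SetBrakeTorque(0.0f, WheelIndex);        // assumed signature
    }
}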
New: Improved Vulkan support on Android
Unreal Engine 4.14 is ready for shipping games with Vulkan support!
UE4 supports Android 7 (Nougat) devices with Vulkan drivers as well as the Samsung Galaxy S7 running a recent OTA update.
Many rendering issues have been fixed with the UE4 Vulkan renderer on Android devices.
The renderer will automatically fall back to OpenGL ES when launched on Android devices that are not Vulkan-capable.
Vulkan support on specific devices and driver versions can now be enabled or disabled using device profiles, with fallback to ES 3.1 and ES 2. This allows UE4 games to disable Vulkan support and use OpenGL ES on phones with incomplete or broken Vulkan implementations.
New: Support for Custom Depth on Mobile
Custom Depth is now supported in the mobile rendering path. Custom post-process materials can now sample from Scene Depth, Custom Depth as well as Scene Color.
As it requires post-processing, Mobile HDR must be enabled, and the feature does not currently work while Mobile MSAA is enabled.
New: Scene Capture Improvements on Mobile
When rendering scene captures, the Scene Capture Source settings that output Inverse Opacity and Depth values are now supported on mobile.
The "SceneColor (HDR) in RGB, Inv Opacity in A" option can be used to render objects with translucency into a texture which can then be alpha-blended over a scene or widget blueprint.
Similarly, the depth value can be used as a mask when using the resulting texture.
Generating the opacity data has some cost, so use "SceneColor (HDR) in RGB, 0 in A" for improved performance if you do not need opacity.
Scene captures now work correctly on devices that do not support floating point targets, such as Galaxy S6 prior to Android 6.
New: Improved Cloth Skinning
We have added the ability to calculate our own mesh-to-mesh skinning data for clothing within the engine, so rather than using the render data exported in an .apx or .apb file we now use the render data UE4 already has. We take the simulation mesh from the APEX-exported asset and reskin our render data onto that mesh. This means that the final data should look as good as the data you originally imported.
This brings a few benefits. Normals could previously appear incorrect (see image below) and you were previously restricted to one UV channel. Both of these issues are solved with the new skinning system.
New: Material Attribute Nodes
Working with material attributes is now easier to read and less error prone as part of an ongoing update to improving extensibility of material properties.
GetMaterialAttributes - This node is a compact replacement for BreakMaterialAttributes
SetMaterialAttributes - This node is a compact replacement for MakeMaterialAttributes
BlendMaterialAttributes - This is a new node to allow easier blending of Material Attributes structures.
The main improvement for the Get and Set nodes is that pins are optionally added unlike the Break and Make nodes which expose all attributes by default. This allows graphs to avoid the old workflow that required manually connecting every attribute pin. Selecting a node shows the list of current pins in the details panel which can be expanded or removed. For an example, the material function below takes a set of attributes then blends the Base Color and Roughness to a shiny, red surface.
As well as reducing clutter in the graphs, these nodes take advantage of many backend changes to be forward-compatible with any custom material attributes that a project may need to add. Sharing materials between projects is more viable as missing attributes are automatically detected and users given a chance to handle the errors. Any attribute not explicitly listed on a node is passed through with the main Material Attributes pin, including any that are added after the material graph is created. With the Make and Break nodes a new pin would be added and all graphs would need manually updating.
The new Blend node is intended to allow blending of multiple sets of attributes using a mask, a common operation when working with detailed layers of materials. The example below evenly blends Red and Green materials (defined as functions) then has a node that applies a clear-coat to the result:
By default the Blend node performs a linear interpolation (lerp) for all material attributes using the Alpha input. The node has checkboxes to opt-out of blending on a per-vertex/pixel level to allow easier control when using vertex-only or pixel-only mask data. Similarly to the new Get and Set nodes above, the Blend node will automatically handle new attributes being added or removed and allows programmers to specify custom blending behavior when registering attributes.
New: Pre-Skinned Local Position in Materials
Materials now have access to a skeletal mesh's reference pose position for use in per-vertex outputs. This allows localized effects on an animated character. The node can also be shared with static meshes, for which it returns the standard local position. The example graph below creates a grid pattern in local space which remains relative to the skeletal mesh during animation:
New: Improved Sequencer Shot Import/Export
Movie recording with frame handles per shot. Master sequences can now be rendered with extra frames at the start and end of each shot. These extra frames are cut into and out of by an Edit Decision List (EDL), which can be used in an external video editing package to adjust the cuts between shots.
New: Improved Camera Rig Crane
We've tweaked the camera rig crane behavior so that it mimics the movement of a physical crane.
Roll and yaw of the camera crane mount is 0.
Add toggles to lock the mount pitch/yaw for the crane. By default they are not locked so that the camera will stay level with the ground.
New: Sequencer Audio Recording
You can now record audio from a microphone while recording into a sequence.
New: Pose Driver Improvements
The Pose Driver node allows a bone to drive other aspects of animation, based on a set of "example poses". In this release, it can now drive bone transforms as well as morph targets, for example driving a shoulder pad bone based on arm rotation. We have also added an option to use the translation of the driving bone instead of its orientation. Debug drawing has been improved to show each "target" pose and how close the bone currently is to it.
New: Virtual Bones
We've added the ability to add "virtual bones" to a skeleton. Virtual bones are not skinnable, but are constrained between two existing bones on the skeleton and automatically have data generated for them for each animation on the skeleton. For example, you could add a joint that is a child of a hand but constrained to a palm joint. Unlike a socket, this joint can be used in an Animation Blueprint as a target (an IK target or look-at target, for example), or you can modify it in the AnimBP for later use.
This helps improve character iteration time. Previously, if you changed your target joint hierarchy for IK or aiming, you had to do it outside the engine in a DCC tool and re-import all the animations to bring the new joint into the animation data; virtual bones let you skip that and do all of the work in the engine. It does, however, require recompressing the animation data to include the new joint. For more practical usage of virtual bones, see "Animation Techniques used in Paragon". They can make it easier to retarget or change reference frames for controllers, and are used for orientation and slope warping in Paragon.
New: Morph Target Debug View Mode
The new Morph Target View Mode makes it easy to see which vertices are affected by each morph target.
New: Child Animation Montages
Create a Child Montage based on a parent Montage, allowing you to replace animation clips, whilst maintaining overall timing. Useful for adding variations to a move whilst guaranteeing it won't affect gameplay.
New: MIDI Device Plugin
This release contains a new "MIDI Device" plugin for interaction with music hardware.
This is a simple MIDI interface that allows you to receive MIDI events from devices connected to your computer. Currently only input is supported. In Blueprints, here's how to use it:
Enable the "MIDI Device" plugin using the Plugins UI, then restart Unreal Editor.
Look for "MIDI Device Manager" in the Blueprint RMB menu.
Call "Find MIDI Devices" to choose your favorite device.
Break the "Found MIDI Device" struct to see what's available.
Then call "Create MIDI Device Controller" for the device you want. Store that in a variable. (It's really important to store the reference to the object in a variable, otherwise it will be garbage collected and won't receive events!)
On your MIDI Device Controller, bind your own Event to the "On MIDI Event" event. This will be called every game Tick when there is at least one new MIDI event to receive.
Process the data passed into the Event to make your project do stuff!
New: Landscape Rotation Tool
The landscape mirror tool can now flip the reflected geometry parallel to the mirror plane, to create diagonally-opposed multiplayer maps.
New: Improved Mesh Material Slot Importing
The material workflow has been changed in order to give more control and information on how every material is used by static and skeletal meshes, and to improve material ordering inconsistencies when reimporting meshes.
Each element in the list is a material slot with the following information
Name of the slot
The name of the slot is used to match up the material on reimport. When a mesh is reimported, it looks for this name in the FBX file to determine which sections should match up to existing materials. Previously this relied on index ordering, which could easily become out of order.
Meshes that were imported before this change will have their material slot set to none. Meshes imported after this change will have their material slot set to the imported material name by default.
Material asset reference
The original imported material name (in the tooltip)
In Blueprints and C++ it is now possible to use the material slot name instead of using a hard coded index to retrieve a material slot. Call Set Material By Name to set a dynamic material on a skeletal mesh or static mesh component. Using a name lookup instead of an index ensures gameplay code still works properly if the order of materials on a mesh changes.
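A minimal C++ sketch of the same idea; the "Body" slot name, the helper function, and its parameters are assumptions for illustration:

#include "Components/SkeletalMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

// Assign a dynamic material instance by slot name, so the assignment keeps working
// even if the slot's index changes after a reimport.
static void ApplyBodyMaterial(USkeletalMeshComponent* MeshComponent, UMaterialInterface* BaseMaterial)
{
    if (MeshComponent == nullptr || BaseMaterial == nullptr)
    {
        return;
    }
    UMaterialInstanceDynamic* DynamicMaterial = UMaterialInstanceDynamic::Create(BaseMaterial, MeshComponent);
    MeshComponent->SetMaterialByName(TEXT("Body"), DynamicMaterial);
}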
New: Platform SDK Upgrades
In every release, we update the engine to support the latest SDK releases from platform partners.
Xbox One: Upgraded to August 2016 QFE 2
PlayStation 4: Upgraded to SDK 4.008.061
HTML5: Upgraded to Emscripten 1.36.13
macOS: Now supports 10.12 Sierra, Xcode 8.1
iOS/tvOS: Now supports iOS10/tvOS10, Xcode 8.1
New: Blueprint Library for Mobile Downloading/Patching
The new Mobile Patch Utilities Blueprint library contains all the functionality required to allow a mobile game to download and install game contents and patches from a cloud website instead of being distributed as part of the initial download from the App Store.
There is functionality to determine if updated game content is available, initiate the download, track progress, handle any errors and finally install the content paks that are downloaded successfully. Functionality to check for sufficient storage space and WiFi connectivity is also available, so the blueprint can warn the user in such cases. Both Android and iOS are supported.
New: Amazon GameCircle Plugin for Kindle Fire
A new Online Subsystem GameCircle plugin is now included!
GameCircle Achievements, Leaderboards, and Friends are supported, as well as Amazon In-App Purchases. Enabling the plugin will provide access to a new Amazon GameCircle project settings panel under the Plugins category. Changes to the AndroidManifest.xml for Fire TV may be enabled here.
New: Live GPU Profiler
UE 4.14 includes a real-time GPU profiler which provides per-frame stats for the major rendering categories. To use it, enter the console command "stat gpu". You can also bring these up in the editor via the "Stat" submenu in the Viewport Options dropdown.
The stats are cumulative and non-hierarchical, so you can see the major categories without having to dig down through a tree of events. For example, shadow projection is the sum of all the shadow projections for all lights (across all the views).
The on-screen GPU stats provide a simple visual breakdown of the GPU load when your title is running. They are also useful for measuring the impact of changes, for example when changing console variables, modifying materials in the editor, or modifying and recompiling shaders on the fly (with "recompileshaders changed").
The GPU stats can be recorded to a file while the title is running for analysis later. As with existing stats, you can use the console commands "stat startfile" and "stat stopfile" to record the stats to a ue4stats file, and then visualize them by opening the file in the Unreal Frontend tool.
Profiling the GPU with UnrealFrontend. Total, postprocessing and basepass times are shown
Stats are declared in code as float counters, e.g.:
DECLARE_FLOAT_COUNTER_STAT(TEXT("Postprocessing"), Stat_GPU_Postprocessing, STATGROUP_GPU);
Code blocks on the rendering thread can then be instrumented with SCOPED_GPU_STAT macros which reference those stat names. These work similarly to SCOPED_DRAW_EVENT. For example:
SCOPED_GPU_STAT(RHICmdList, Stat_GPU_Postprocessing);
GPU work that isn't explicitly instrumented will be included in a catch-all [unaccounted] stat. If that gets too high, it indicates that some additional SCOPED_GPU_STAT events are needed to account for the missing work. It's worth noting that unlike the draw events, GPU stats are cumulative. You can add multiple entries for the same stat and these are aggregated across the frame.
In certain CPU-bound cases the GPU timings can be affected by CPU bottlenecks (bubbles) where the GPU is waiting for the CPU to catch up, so please consider that if you see unexpected results in cases where draw thread time is high. On PlayStation 4 we correct those bubbles by excluding the time between command list submissions from the timings. In future releases we will be extending that functionality to other modern rendering APIs.
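Tying the two macros above together, here is a minimal sketch of instrumenting a hypothetical rendering-thread pass (the stat name and the function are illustrative):

// Declared once at file scope, as in the example above.
DECLARE_FLOAT_COUNTER_STAT(TEXT("MyCustomPass"), Stat_GPU_MyCustomPass, STATGROUP_GPU);

// Hypothetical rendering-thread function; everything issued inside the scope below
// is attributed to the "MyCustomPass" GPU stat.
void RenderMyCustomPass(FRHICommandListImmediate& RHICmdList)
{
    SCOPED_GPU_STAT(RHICmdList, Stat_GPU_MyCustomPass);

    // ... draw calls / dispatches for the custom pass go here ...
}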
New: Improved Merge Actor Texture Atlas (experimental)
We've improved how texture space is utilized when merging actors together and combining materials, by introducing a new option to generate a weighted (binned) atlas texture, instead of an atlas texture in which each material is equally weighted.
(Left: equal-weighted materials. Right: binned method)
The new functionality first calculates the importance of an individual material according to the maximum sized texture it samples. These values are then used to calculate the amount of space the material should occupy and to iteratively add each texture to the atlas texture. This makes the atlas texture more representative of the input materials, because it takes the original texture data size into account for the resulting space each material occupies in the atlas texture.
New: Android Support on Linux
Thanks to pull requests from the community, we now have Android support for Linux. CodeWorks for Android from NVIDIA for Linux is the easiest way to set up the NDK and SDK tools needed. In addition, install OpenJDK 1.8 and set JAVA_HOME to point to your install. Please note that Android Vulkan on Linux is not supported at this time.
New: Media Player Editor
You can now drag and drop files from your computer into the Media Player's viewport, allowing you to preview video files without having to create a FileMediaSource asset first.
A tab for decoder performance statistics has been added. The output depends on the player plug-in being used for playback.
New: VR Multiview Support for Mobile
You can now use the mobile multiview path on supported devices! Mobile multiview is similar to instanced stereo on the desktop, and provides an optimized path for stereo rendering on the CPU.
To use this feature, enable it in your Project Settings, under the VR section. For the feature to work, Android build settings should be set to OpenGL ES2, Mobile HDR should be disabled, and instanced stereo should be disabled. Currently, the feature is compatible with modern Mali-based GPUs. If you package with the feature on, but don't have a compatible GPU, it will be disabled at runtime.
This feature is still considered experimental as we verify compatibility with more features and devices.
Enable Mobile Multiview in your Project Settings menu as seen above, then restart your editor for the changes to take effect.
New: Layer Support for SteamVR and PSVR
Layer support has now been added for both SteamVR and PSVR! It works exactly like it does for the Oculus Rift plugin, using the Stereo Layer component.
New: VR Loading Movies
The engine now supports loading movies on Oculus, GearVR, SteamVR, and PSVR. These run on the rendering thread, and can mask framerate hiccups as you load up your content. To use the splash screen, you can set a texture using the "Set Splash Screen" node, or choose to automatically have it appear when you load a map with the "Enable Auto Loading Splash Screen" node.
New: PSVR Support for Multiple Framerate Targets
We now support native 90Hz to 90Hz reprojection and 120Hz to 120Hz reprojection on the PSVR! This means you can opt in to running at a higher framerate to minimize latency and reprojection artifacts. The engine will limit your framerate to your selected option, but it's still your responsibility to make sure you consistently maintain that framerate!
New: Multitouch Support in Windows
Touch events will now be generated in Windows 7, 8, and 10 when using a touch screen. This enables touch-enabled games and experiences on new Windows tablets and also enables testing touch controls for mobile games without having to deploy to a target device.
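These events arrive through the normal touch input bindings, so existing mobile input code picks them up unchanged. A minimal sketch for a hypothetical pawn (the class and handler names are assumptions):

#include "GameFramework/Pawn.h"
#include "Components/InputComponent.h"

// Fragment of a hypothetical pawn class (AMyTouchPawn) that reacts to touch presses,
// whether they come from a mobile device or a Windows touch screen.
void AMyTouchPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);
    PlayerInputComponent->BindTouch(IE_Pressed, this, &AMyTouchPawn::OnTouchPressed);
}

void AMyTouchPawn::OnTouchPressed(ETouchIndex::Type FingerIndex, FVector Location)
{
    // Location.X / Location.Y are the screen-space coordinates of the touch.
    UE_LOG(LogTemp, Log, TEXT("Touch %d at %s"), static_cast<int32>(FingerIndex), *Location.ToString());
}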
New: Fast Asynchronous Loading System (experimental)
Cooked builds can now use a completely new Event Driven Loader which is far more efficient than the old streaming code. Games using the EDL should see load times drop by about 50%, but in many cases it may be dramatically faster. The Event Driven Loader comes with a unified code path for loading assets. This means that all packages will be loaded using the new async path instead of the old blocking path. EDL is currently an experimental feature and disabled by default, but can easily be enabled through Project Settings.
New: Simplified Game Mode and Game State Classes
We've added the new Game Mode Base and Game State Base classes as parents of the existing classes Game Mode and Game State. Core features needed by all games are now in the Base classes, while legacy and shooter-specific features are in the Game Mode/Game State. Newly-started projects will inherit from the Base classes, while existing projects will default to using their legacy counterparts. Additionally, some new functions have been exposed to Blueprints.
This change is part of an ongoing effort to update older gameplay classes to be easier to understand and subclass for projects of all types. All of the samples other than Shooter Game have been updated to use the Base classes, while Shooter Game shows how to use some of the more shooter-specific features in the legacy classes. Game Mode/Game State will continue to be supported, and you should subclass the version which is more appropriate for your game.
New: Faster Network Replication
An internal re-factor was done to how we replicate properties from the server to connected clients.
We have modified the code to more efficiently share property replication work across many connections. Before this change was made, we used to only share the work of unconditional properties (properties registered with DOREPLIFETIME rather than DOREPLIFETIME_CONDITION). We now share the work for all types of properties. What this means is that the work we do to check whether properties have changed (and need to be sent) happens far less often.
Internally this has shown improvements of as much as 40%. We have more work to do here, but wanted to share this good news and progress with you in the meantime!
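For context, these are the two registration macros the note refers to; a minimal sketch of how replicated properties are typically registered (the actor class and its Health/Ammo properties are hypothetical):

#include "Net/UnrealNetwork.h"

// Fragment of a hypothetical replicated actor (AMyReplicatedActor) whose Health and Ammo
// UPROPERTYs are declared as Replicated in its header.
void AMyReplicatedActor::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
    Super::GetLifetimeReplicatedProps(OutLifetimeProps);

    // Unconditional: considered for every connection whenever it changes.
    DOREPLIFETIME(AMyReplicatedActor, Health);

    // Conditional: only replicated to the owning connection.
    DOREPLIFETIME_CONDITION(AMyReplicatedActor, Ammo, COND_OwnerOnly);
}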
Release Notes
New: Added an "Enable AI System" flag to the World Settings that controls whether the AI System will be created for the given map.
Bugfix: Fixed an issue with AI Sense Sight's time slicing unintentionally skipping queue-aging if the given time limit was reached.
Bugfix: Fixed an issue where whether the AI should turn was decided by comparing the current desired rotation with the last update's desired rotation. The AI now turns based on whether the current desired rotation differs from the Pawn's rotation.
Improved the consistency of loudness usage in AI Sense Hearing.
Minor cosmetic fix to the EQS test scoring function's preview.
Selecting Geometry trace mode will no longer project onto the Nav Mesh first in the Projected Points EQS generators.
Behavior Tree
New: Expanded the BT Decorator Is At Location function with an option to use AI Data Provider.
New: Multi-node comments are now supported in the Behavior Tree editor. This is similar functionality to the Blueprint editor.
Bugfix: Fixed an issue with BT Decorator Compare BB Entries function so that it works regardless of the order of the supplied BB keys.
Bugfix: Blackboard key synchronization mechanics correctly handle more edge cases and complex inheritance scenarios.
Bugfix: Fixed an issue in propagation of persistent Blackboard keys resulting in these keys missing or being doubled in child Blackboard assets.
Bugfix: Fixed an issue for derived Blackboard sometimes getting post-load mechanics run before their parents.
Navigation
Bugfix: Aborting previous movement requests as part of requesting a new one no longer resets the agent's current velocity.
Bugfix: Blueprint "AI Move To" function no longer uses the default AI Controller's navigation filter.
Bugfix: Fixed navigation export for landscape not handling mirroring and async gathering correctly.
Bugfix: Fixed an issue with navigation export for the capsule component.
Bugfix: Navlink's "Snap to cheapest area" mode now works correctly with dynamic navmeshes.
Bugfix: Navmesh generation no longer gets stuck in an infinite loop.
Bugfix: Fixed an issue with parameters of vehicle RVO avoidance.
Bugfix: Fixed an issue with the Path Following Component's "Has Reached" check not using the Goal's Radius when "Use Nav Agent Goal Location" is set to false. When "Use Nav Agent Goal Location" is set to true, we want to avoid using the Goal's location on the navmesh; instead, the acceptability of a radius should be based on the Goal's radius.
Path Following Component distinguishes between partial and full paths in terms of acceptance radius used. This will make partial paths behave as expected with non-trivial big acceptance radii.
Layer limit for navmesh generation has been increased.
Update Move Focus will only clear the focus if the path following component is idle. This keeps the AI rotated in the correct direction when movements get paused.
New: Added a "Strip Animation Data On Dedicated Server" option to Animation Settings. This will remove all compressed data from cooked server data. Disabled by default.
New: Added an "Aim Offset Look At" node: an Aim Offset node that drives its inputs automatically from a Target Location and a Source Socket.
New: Added Conversion settings to the Alembic import process. This enables the following:
Inverting UV Channels.
Applying a conversion matrix to the Alembic File.
Presets for Autodesk Maya and 3DS Max.
New: Added "LegIK" Anim node, which supports two or more bones per limb with min compression rotational constraint.
New: Added logic to separate overlapping notifies onto separate tracks. Notifies can no longer end up hidden from the user by occupying the same time on the same track.
New: Added the option to copy selected Morph Target names to clipboard.
New: Added a start time to Montage Play node and Play Slot Animation as Dynamic Montage.
New: Added the ability to skip empty "preroll" frames when importing an Alembic file and improved the frame information in the Import Options UI.
New: The animation "Sequence Length" property can now be read in Blueprints.
New: Added the ability to open an Anim Montage asset from the context menu of a segment.
New: Added the ability to view the vertices modified by the selected Morph Target in Persona.
New: Added the option to insert the current pose into a Pose Asset with Insert Pose.
New: Added SmartName Deterministic Cooking:
SmartName discards GUID during cooking and DisplayName becomes the identifier during cooking.
The GUID made cooking nondeterministic, causing patch sizes to become huge without any actual data change.
Bugfix: Fixed a crash when attaching slave components with differing bone counts.
Bugfix: Fixed a crash with retargeting Additive Animation Montage.
Bugfix: Fixed a crash when routing a wire through multiple re-route nodes in an Animation Blueprint.
Bugfix: Fixed a crash in "Get Post Curve" when importing a pose to a pose asset.
Bugfix: Fixed a crash when retargeting an Anim Blueprint with "Allow remapping to existing assets" enabled.
Bugfix: Fixed a crash when calling "Set Animation Mode" on a component with no Skeletal Mesh.
Bugfix: Fixed an issue so that when re-creating a skeleton due to a merge conflict, all Skeletal Meshes are added back to the skeleton and the full hierarchy is recreated.
Bugfix: Fixed an issue with invalid bound calculation in Calculate LOD Count.
Bugfix: Ensure the runaway loop counter gets reset when processing parallel animation.
Bugfix: Fixed fast-path struct copy being broken for Vectors.
Bugfix: Fixed a warning triggered by calling "Get Bone Transform" before the master pose component has been registered.
Bugfix: Fixed an issue where "Layered Blend Per Bone" would alternately work or fail depending on odd/even connection counts.
Bugfix: Fixed an issue with offset of the floor mesh in Persona when Auto-Alignment was enabled.
Bugfix: Fixed an issue with pose flickering on LOD change when using the "Layered Blend by Bone" node.
Bugfix: Fixed an issue to stop empty states being created when dragging montages into State Machines.
Bugfix: Fixed an issue with the Anim Instance "Is Running Parallel Evaluation" check to correctly test whether the Skeletal Mesh Component still references the Anim Instance being used.
Bugfix: Fixed an issue with a "generated" string appearing for a Skeletal Mesh LOD that was previously auto-generated but had been reimported from an FBX.
Bugfix: Fixed an issue with displaying one extra frame for an animation.
For example, for 30 frames it displayed 0-30, which is incorrect and should be 0-29.
To avoid confusion, the key count is no longer displayed. The last frame number is shown instead, so it will look different, but the data is still the same.
Bugfix: Fixed a number of cases where sub-instance animation instance functions weren't being correctly passed through the main instance.
Bugfix: Fixed an issue with the "auto" check box in the Morph Target panel.
Bugfix: Fixed an issue with the functionality of the "show uncompressed animation" option in Persona viewports.
Bugfix: Fixed an issue with entries in the Anim Curves tab losing their "auto" state when hidden and reshown.
Bugfix: Removed errant asterisks in the tooltips on the "Rotation Multiplier" node.
Bugfix: Fixed incorrect tooltips on the "Hand IK Retargeting" node.
Bugfix: Fixed an issue in montages where the wrong section was updated with changes from a Details panel when clicking on a different section.
Bugfix: Fixed an issue when duplicating an Anim Sequence that has been layer-edited would apply the layer twice.
Bugfix: Fixed an issue where Geometry Cache objects would not be visible in Standalone (packaged) builds.
Bugfix: Fixed an issue with keyboard shortcuts for Pose Assets.
