Posted 10 June 2014 - 05:28 PM
a) Repaired a bug in the Windows 32bit build of version 1.1. Unlike Linux (where fopen effectively defaults to binary), under Windows it defaults to text mode, with inevitable chaos when reading binary files such as e3d.
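For anyone who hits the same thing in their own code, the fix boils down to always passing the explicit 'b' mode flag (a minimal sketch with an invented function name, not the converter's actual source):

```c
#include <stdio.h>

/* On Windows, fopen("r") opens in text mode and translates line
   endings, which corrupts binary data such as e3d files. An
   explicit "b" gives identical behaviour on Windows and Linux. */
FILE *open_e3d(const char *path)
{
    return fopen(path, "rb");   /* not "r" */
}
```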
b) Released version 1.2, which contains an option to simply report on a file without converting it.
Found a workaround for the problem with Unity messing up obj file textures and materials. It seems Unity requires the y axis of the texture uv to be reversed (as per the e3d) and requires additional tags on face groupings. This converts the object into a series of sub-meshes that can be accurately assigned to a particular texture.
Will update in version 1.3 and release tomorrow.
Posted 15 June 2014 - 11:48 AM
This version contains some revised and additional options:
a) e3d_conv [filename] Y - This is the format you'll need if you're going to use the resulting obj file in Unity, which requires the y axis of the texture coordinates to be reversed (see the sketch after this list). However, if you intend to use the obj file in a 3d modelling app such as Blender, Misfit or Milkshape, you'll need to leave this option out.
b) e3d_conv [filename] M - This is the format you'll need if you're going to use the resulting obj file in a 3d modelling app, as it will create the necessary mtl file that most require in order to map the required texture files. However, you won't need this option if you're using the file in Unity, as it relies only on data contained in the obj file.
c) e3d_conv [filename] S - This is the format you may want if you're doing a batch conversion and want to turn off the diagnostic data.
d) e3d_conv [filename] R - This is the format you may want if you simply need information about the e3d file, such as the names of the required texture files or, if you use option 'RV', how much redundant vertex data is contained within the file.
e) e3d_conv [filename] V - This optimises the data that is written to an obj file to remove the redundant vertices that seem to clutter many of EL's e3d files. It will slightly reduce the size of the resulting obj file and slightly increase the render speed under Unity.
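For those curious what the 'Y' option actually does, it comes down to mirroring each v coordinate about the 0..1 range (a sketch with invented names, not the tool's actual source):

```c
/* Reverse the y (v) axis of the texture coordinates, as Unity
   expects. obj runs v bottom-up; mirroring about the 0..1 range
   flips it. */
typedef struct { float u, v; } uv_t;

void flip_uv_y(uv_t *uv, int count)
{
    for (int i = 0; i < count; i++)
        uv[i].v = 1.0f - uv[i].v;
}
```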
The obj file format now creates a series of sub meshes (using the wavefront obj 'g' tag). This means that you can now allocate the correct textures to each part of an object under the Unity editor. Alas, Unity won't do this automatically when you use an obj file as it can't read the required mtl file. However, the names of the submeshes now relate directly to the required texture files making it pretty easy to manually match the two.
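As a rough illustration (group names invented), the output now looks something like this, with one 'g' group per texture so Unity can split the object into submeshes:

```
g grass2
f 1/1 2/2 3/3
f 3/3 4/4 1/1
g rock1
f 5/5 6/6 7/7
f 7/7 8/8 5/5
```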
I've also started work on the obj-e3d converter. This will allow the use of simpler, more streamlined 3d modelling apps such as Misfit and Milkshape, which should make the process of creating new objects faster and easier to learn. So far, I've completed functions that collect the necessary vertex, texture and materials data from an obj file and convert it into the formats required under e3d. It's now a matter of converting the face data into an e3d index.
A particular challenge is that the wavefront obj format has a vast variety of different options, many of which aren't supported under e3d. Likewise, the e3d format requires data, eg tangents, which isn't part of the wavefront obj format. Hence, conversion isn't a straightforward process and it will be necessary to wrangle the available data to take account of missing/additional information. Because wavefront obj handles such a wide variety of 3d objects (whereas e3d handles a fairly narrow set), in some cases conversion just isn't possible. In those circumstances, it's necessary to give the user decent diagnostic output so they know exactly why (as opposed to the current Blender macros, which simply leave you guessing).
To help those who simply want to experiment with altering the textures of the current files, I'm also going to incorporate an option allowing you to change the texture data that is currently hardcoded into e3d files.
If anyone can think of any other useful options, let me know.
Posted 16 June 2014 - 12:52 PM
If you want to edit an existing EL/OL object, you'll need to manually create a cal file which the 3d modelling package can then use as a target. However, it's a text-based format and pretty easy to do.
Whilst there's a Blender macro and (unusually) it's pretty reliable, it's far easier to use an app called Milkshape, which can read and write the necessary files natively. Last time I experimented, I imported the grizzly bear attack animation, which worked just fine.
If you download the development pack for cal3d, it has an excellent little demo called Cally which shows exactly what the library can do. Alas, actually compiling from source is a massive pain if you try to do it on Windows or 64-bit Linux. Ages back, I think I posted something about which version of the lib works best with EL/OL (some don't).
I'm not sure if Unity supports cal3d, so it's likely that we have a challenge ahead trying to convert the existing stuff. At a pinch, I guess we might use fbx, as it seems to be Unity's preferred import medium and carries animation data. However, we'd be reliant on Blender's fbx export, which is an unknown quantity as far as dynamic content is concerned.
Posted 17 June 2014 - 07:06 AM
P.S. This isn't going to slow magic down. I have others doing the heavy lifting. I'm coordinating and doing the mass conversions.
Posted 25 July 2014 - 07:10 PM
Well, it turned out better to write the e3d to obj converter first (e3d_conv). Since then, I've been working on an obj to e3d converter (obj_conv).
The first draft of the code is written and is currently going through bug testing. Much of the code is experimental, involving techniques and functions that I've not used before, hence actually getting it to work has been a bit of a trial and I'm not there yet. However, one by one, I've been dealing with the problems and am hopefully getting close to the point where it will actually convert something instead of crashing.
Whilst you might imagine that it should be no more difficult than writing the e3d to obj converter, in fact it's very much more difficult. That's because binary formats such as e3d are more problematic to write and debug than text formats such as obj. All it takes is one misplaced byte (amongst tens of thousands) and the whole file is corrupted. Finding which byte is wrong is no simple matter, as the e3d format uses dynamic hashes that change depending on multiple flag settings, plus non-standard data types such as half floats which aren't natively supported by the compiler. Add to that the fact that the e3d format comes in three different versions and you begin to see the challenges involved.
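To give a flavour of the half-float problem: with no native 16-bit float type, each value has to be packed by hand from the 32-bit representation. A simplified sketch (no rounding, NaN or denormal handling, and not necessarily how obj_conv does it):

```c
#include <stdint.h>
#include <string.h>

/* Pack a 32-bit float into the 16-bit 'half' layout: 1 sign bit,
   5 exponent bits, 10 mantissa bits. */
uint16_t float_to_half(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000);
    int32_t  exp  = (int32_t)((bits >> 23) & 0xff) - 127 + 15; /* rebias */
    uint16_t mant = (uint16_t)((bits >> 13) & 0x03ff);   /* top 10 bits */

    if (exp <= 0)  return sign;            /* flush tiny values to 0 */
    if (exp >= 31) return sign | 0x7c00;   /* overflow to infinity   */
    return sign | (uint16_t)(exp << 10) | mant;
}
```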
Atm, the current code writes an older version of the e3d format that's simpler than the later versions and therefore easier to debug and get working. The idea is to use this to prove the basic structure of the code, then start to build in handling of the more complex elements. Once we get to the point where the code produces a file which can be read by Blender or the EL map editor, the major part of the challenge will have been cracked.
Not sure when that's likely to be. However, it's a bit closer since the latest problem has been solved. Hopefully I'll be in a position to post some initial screenies soon.
Posted 28 July 2014 - 08:16 PM
I've also been planning some further changes to e3d_conv, including more error-trapping code and a diagnostic option that dumps the entire e3d file in human-readable form. All that should appear in version 1.7.
Posted 03 August 2014 - 05:18 PM
Hit some issues with obj_conv which required a fundamental rethink of how to write the e3d files. I'm now using structs rather than a single large string of unsigned chars, which seems to be simpler and more reliable. Whilst that's meant dropping the ability to support all options available under the e3d format, none of the unsupported options are used in the current object files in any case.
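Loosely, the change amounts to something like this (field names invented; the real e3d vertex layout has more fields and varies with the header flags): describe each on-disk record as a packed struct and write it in one go, instead of poking bytes into a big buffer by hand:

```c
#include <stdio.h>

/* A packed struct mirrors the on-disk layout exactly (no compiler
   padding), so a single fwrite replaces dozens of hand-offset
   writes into an unsigned char buffer. Assumes a little-endian
   machine, matching the file format. */
#pragma pack(push, 1)
typedef struct {
    float pos[3];
    float normal[3];
    float uv[2];
} vertex_rec;
#pragma pack(pop)

int write_vertices(FILE *fp, const vertex_rec *v, size_t count)
{
    return fwrite(v, sizeof *v, count, fp) == count;
}
```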
Posted 04 August 2014 - 07:41 PM
Nope, doesn't look too good, does it? However, that rather overlooks the breakthrough that it works at all as, previously, it wasn't possible to get any kind of image (even a crappy broken one). Hence, at least some small progress has been made.
I should also mention that I've posted version 1.7.1 of e3d_conv, which fixes some obvious bugs in the last release.
Posted 05 August 2014 - 05:04 AM
Together, the tools will provide an easy, convenient and reliable toolchain which will allow peeps to modify existing EL 3d objects or create new ones. That's something that's pretty much impossible atm due to the unreliable nature of the EL macros and the difficulties of working with a crappy, ancient version of Blender.
The tools do a little more than just convert files. They also optimise the file data so that files are smaller and load faster; they contain diagnostics to help texture editing; plus, I'm working on an option for them to recalculate normal and tangent data so that objects look better. Oh yes, and there will also be an option to enable peeps to easily replace texture files without fiddling with 3d editors.
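For the curious, the planned normals recalculation is the standard averaging approach (a sketch under assumed types, not the tool's actual code): each triangle adds its face normal to its three vertices, and the accumulated vectors are then renormalised:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Recalculate smooth vertex normals: accumulate each triangle's
   face normal onto its three vertices, then normalise. Assumes
   triangulated faces and an indexed vertex list. */
void recalc_normals(const vec3 *pos, const int *idx, int tri_count,
                    vec3 *normals, int vert_count)
{
    for (int i = 0; i < vert_count; i++)
        normals[i] = (vec3){0, 0, 0};

    for (int t = 0; t < tri_count; t++) {
        int a = idx[3*t], b = idx[3*t+1], c = idx[3*t+2];
        vec3 e1 = { pos[b].x - pos[a].x, pos[b].y - pos[a].y, pos[b].z - pos[a].z };
        vec3 e2 = { pos[c].x - pos[a].x, pos[c].y - pos[a].y, pos[c].z - pos[a].z };
        vec3 n  = { e1.y*e2.z - e1.z*e2.y,      /* cross product */
                    e1.z*e2.x - e1.x*e2.z,
                    e1.x*e2.y - e1.y*e2.x };
        normals[a].x += n.x; normals[a].y += n.y; normals[a].z += n.z;
        normals[b].x += n.x; normals[b].y += n.y; normals[b].z += n.z;
        normals[c].x += n.x; normals[c].y += n.y; normals[c].z += n.z;
    }

    for (int i = 0; i < vert_count; i++) {
        float len = sqrtf(normals[i].x*normals[i].x +
                          normals[i].y*normals[i].y +
                          normals[i].z*normals[i].z);
        if (len > 0.0f) {
            normals[i].x /= len; normals[i].y /= len; normals[i].z /= len;
        }
    }
}
```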
Any other ideas, let me know and I'll try and incorporate them.
Posted 05 August 2014 - 04:02 PM
Fair to say, we now have a basic working prototype for obj_conv, so I'll be posting version 1.0 shortly on the UnoffLandz Sourceforge site. What now needs to be developed is the following:
a) tangent data calcs (tangents are presently set to zero - see the sketch after this list)
b) recalculation of normals data (so curved objects look better)
c) option to replace existing texture files (without having to use a 3d modelling App)
d) combine e3d_conv and obj_conv into one tool that does everything
e) experiment with some of the weird e3d options (colour, extra uv) that are no longer used but might be useful when creating new objects
f) option to optimise existing EL e3d files so they load quicker and have fewer seams and visual artifacts.
g) think of snappy new name for this tool.
EDIT - prototype now posted at https://sourceforge..../obj converter/
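On item (a): obj files carry no tangents, so they have to be derived from the positions and uvs. A common per-triangle formulation (a sketch of one standard technique, not necessarily what the tool will end up using) solves for the surface direction in which u increases:

```c
typedef struct { float x, y, z; } vec3;

/* Per-triangle tangent from positions p0..p2 and uvs (u,v):
   solve  e1 = du1*T + dv1*B,  e2 = du2*T + dv2*B  for T.
   Sketch only: no handedness check, no averaging across faces. */
vec3 triangle_tangent(vec3 p0, vec3 p1, vec3 p2,
                      float u0, float v0, float u1, float v1,
                      float u2, float v2)
{
    vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;

    float det = du1 * dv2 - du2 * dv1;
    float r = (det != 0.0f) ? 1.0f / det : 0.0f;  /* degenerate uvs -> zero */

    vec3 t = { r * (dv2 * e1.x - dv1 * e2.x),
               r * (dv2 * e1.y - dv1 * e2.y),
               r * (dv2 * e1.z - dv1 * e2.z) };
    return t;
}
```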
Posted 10 August 2014 - 07:52 PM
Learner has also come up with an interesting challenge for me, which is to find a better way to check for alpha in the dds texture files. Atm, we do this the same way as the EL/OL client, which is simply by checking the texture format. These come in lots of different flavours; however, the ones commonly found in the EL/OL files are DXT1, DXT3 and DXT5. The purpose of the texture formats is to specify how the colour and alpha data is encoded and the type of compression used to reduce its size. Whilst DXT3/5 provides a proper alpha channel (DXT1 supports 1-bit alpha at best), the latter has the advantage of a much smaller file size and is faster for the EL/OL client to load.
However, in practice, the EL/OL client only really requires DXT3/5 where an image has alpha. In most other circumstances this creates an unnecessary performance hit. Determining whether an image has alpha ought to be an easy matter, as the dds standard provides a series of header flags which are there specifically to tell you this. However, it seems that the graphics app that was used to create the EL/OL files may not have been fully compliant with the required standard and failed to set the necessary flags. I suspect that it's for this reason that the EL/OL client determines if an image has alpha based simply on its texture format, assuming that files with DXT1 have no alpha and those with DXT3/5 have alpha.
The problem is that may not always be correct. However, in the absence of reliable header flags, the only other solution is to decompress the actual texture data and test it to establish whether the alpha channel is being used. Anyhow, that's what I'm currently working on.
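For reference, the format check the client effectively relies on boils down to reading the FourCC from the pixel-format block of the DDS header, which sits at byte offset 84 in a standard DDS file (a sketch, not the client's actual code):

```c
#include <stdio.h>
#include <string.h>

/* Classify a dds texture the way the EL/OL client effectively does:
   read the FourCC from the pixel-format block of the header and
   assume DXT3/DXT5 mean "has alpha". The header alpha flags can't
   be trusted in the EL/OL files, hence the deeper test described
   above. Returns 1 (assume alpha), 0 (assume none) or -1 (error). */
int dds_assumed_alpha(const char *path)
{
    unsigned char hdr[128];
    FILE *fp = fopen(path, "rb");
    if (!fp) return -1;
    if (fread(hdr, 1, sizeof hdr, fp) != sizeof hdr) {
        fclose(fp);
        return -1;                              /* truncated file */
    }
    fclose(fp);

    if (memcmp(hdr, "DDS ", 4) != 0) return -1; /* not a dds file */

    if (memcmp(hdr + 84, "DXT3", 4) == 0) return 1;
    if (memcmp(hdr + 84, "DXT5", 4) == 0) return 1;
    return 0;                                   /* DXT1 etc */
}
```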
Posted 13 August 2014 - 07:16 PM
The data is held in 4x4 pixel blocks of 128 bits, with the first 64 bits of each block being used to carry the alpha data. Alas, it's all in 4-bit values which require some mangling to make them usable. Once that's done, it should then be a matter of scanning the data for any files that have all the alpha set to white. Interestingly, there's no need to scan the whole file. DDS files comprise a series of copies of the same image (known as mipmaps), each at a lower resolution than the last. Hence, scanning the first mipmap tells you everything you want to know.
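Concretely, for DXT3 (where alpha really is stored as explicit 4-bit values) the scan comes down to something like the following: if every nibble in the first mipmap's alpha words is 0xF, the alpha channel is effectively unused. A sketch that assumes the pointer already sits at the start of the first mipmap; DXT5 stores interpolated alpha and would need separate handling:

```c
#include <stdint.h>
#include <stddef.h>

/* DXT3: each 4x4 pixel block is 16 bytes, the first 8 of which hold
   sixteen explicit 4-bit alpha values. If every nibble in the first
   (largest) mipmap is 0xF, alpha is fully opaque and the file could
   safely be re-encoded as DXT1. */
int dxt3_alpha_is_opaque(const uint8_t *mip0, int width, int height)
{
    size_t blocks = (size_t)((width + 3) / 4) * ((height + 3) / 4);

    for (size_t b = 0; b < blocks; b++) {
        const uint8_t *alpha = mip0 + b * 16;  /* first 8 bytes of block */
        for (int i = 0; i < 8; i++) {
            if ((alpha[i] & 0x0f) != 0x0f) return 0;   /* low nibble  */
            if ((alpha[i] >> 4)   != 0x0f) return 0;   /* high nibble */
        }
    }
    return 1;   /* every alpha value is maximum: no real alpha used */
}
```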
Atm, I've got as far as locating the various mipmaps but have yet to start testing the individual blocks. Just to test out my mipmap calcs, I coded a small dds image viewer using the glut library, which could be a useful additional option for e3d_conv.
Posted 14 August 2014 - 04:16 AM
However, if we use large 3d meshes to create them and make them visible from further away, we're likely to introduce massive lag. Such mountains would also be a bitch to texture correctly. Use too few polygons and they will look crap; use more and there's a risk of massive lag.
The best way to do it would probably be with what are called voxels. However, creating terrain shapes with them is a very different business to the tile system currently used by the client. We'd therefore need a completely new map editor, map file system, ground texture system etc etc etc. The good news is that the Unity engine (which Stardark is using to create the new mapmaker) handles voxels really well. It's therefore possible that a Unity client could be developed that uses voxel based terrain.
I guess the starting point for that would be to rework the current elm file format so that we have a framework on which to build/convert maps. This could then be used as a basis for creating a 'proof of concept' using a game engine such as Unity to see how it works in practice. That could be quite an interesting project and I might give it a try once I've finished the alpha searcher.