New Client


  • This topic is locked
76 replies to this topic

#21 themuntdregger

    Official Troll

  • Full Member
  • 1001 posts
  • Location: Behind you

Posted 10 June 2014 - 05:28 PM

Some small messing about with e3d_conv:

a) Repaired a bug in the Windows 32-bit build of version 1.1. Unlike Linux (where fopen defaults to binary mode), Windows defaults to text mode, with inevitable chaos.

b) Released version 1.2, which contains an option to simply report on a file without converting it.

EDIT :

Found a workaround for the problem with Unity messing up obj file textures and materials. It seems Unity requires the y axis of the texture UV to be reversed (as per the e3d) and also requires additional tags on face groupings. This converts the object into a series of sub-meshes that can be accurately assigned to a particular texture.

Will update in version 1.3 and release tomorrow.

#22 Learner

    God

  • Administrators
  • 2890 posts

Posted 10 June 2014 - 07:36 PM

Might I suggest a flag for the Y reversal of the UV, defaulting to what Unity needs?

#23 themuntdregger


Posted 15 June 2014 - 11:48 AM

Ok, version 1.3 is now available at https://sourceforge....er/version 1.3/ Should have a Windows version within the hour.

This version contains some revised and additional options:

a) e3d_conv [filename] Y - This is the format you'll need if you're going to use the resulting obj file in Unity, which requires the y axis of the texture coordinates to be reversed. However, if you intend to use the obj file in a 3d modelling app such as Blender, Misfit or Milkshape, you'll need to leave this option out.

b) e3d_conv [filename] M - This is the format you'll need if you're going to use the resulting obj file in a 3d modelling app, as it will create the necessary mtl file that most require in order to map the required texture files. However, you won't need this option if you're using the file in Unity, as it relies only on data contained in the obj file.

c) e3d_conv [filename] S - This is the format you may want if you're doing a batch conversion and want to turn off the diagnostic data.

d) e3d_conv [filename] R - This is the format you may want if you simply need to know information about the e3d file, such as the names of the required texture files or, if you use option 'RV', how much redundant vertex data is contained within the file.

e) e3d_conv [filename] V - This optimises the data that is written to an obj file by removing the redundant vertices that seem to clutter many of EL's e3d files. It will slightly reduce the size of the resulting obj file and slightly increase the render speed under Unity.

The obj file format now creates a series of sub-meshes (using the wavefront obj 'g' tag). This means that you can now allocate the correct textures to each part of an object under the Unity editor. Alas, Unity won't do this automatically when you use an obj file, as it can't read the required mtl file. However, the names of the sub-meshes now relate directly to the required texture files, making it pretty easy to manually match the two.

I've also started work on the obj-e3d converter. This will allow the use of simpler, more streamlined 3d modelling apps such as Misfit and Milkshape, which should make the process of creating new objects faster and easier to learn. So far, I've completed functions that collect the necessary vertex, texture and materials data from an obj file and convert it into the formats required under e3d. It's now a matter of converting the face data into an e3d index.

A particular challenge is that the wavefront obj format has a vast variety of options, many of which aren't supported under e3d. Likewise, the e3d format requires data, e.g. tangents, which aren't part of the wavefront obj format. Hence, conversion isn't a straightforward process, and it will be necessary to wrangle the available data to take account of missing/additional information. Because wavefront obj handles such a wide variety of 3d objects (whereas e3d handles a fairly narrow set), in some cases conversion just isn't possible. However, in those circumstances, it's necessary to give the user decent diagnostic output so that they know exactly why (as opposed to the current Blender macros, which simply leave you guessing).

To help those who simply want to experiment with altering the textures of the current files, I'm also going to incorporate an option allowing you to change the texture data that is currently hardcoded into e3d files.

If anyone can think of any other useful options, let me know.

#24 butler

    Advanced Member

  • Full Member
  • 1432 posts
  • Location: Scotland

Posted 16 June 2014 - 10:38 AM

How is animation dealt with for the .e3d models?

#25 Learner


Posted 16 June 2014 - 11:09 AM

E3D's don't have any animation. Actors are a different format with mesh, skeletons & animations and a texture overlaid on the mesh.

#26 themuntdregger


Posted 16 June 2014 - 12:52 PM

Yup, animated stuff uses the cal3d format, which is the animation library developed as part of the Worldforge project. Instead of one single file, you get three: a cmf file that holds the mesh, a caf file that holds the animation and a csf file that holds the skeleton.

If you want to edit an existing EL/OL object, you'll need to manually create a cal file which the 3d modelling package can then use as a target. However, it's a text-based format and pretty easy to do.

Whilst there's a Blender macro and (unusually) it's pretty reliable, it's far easier to use an app called Milkshape, which can read and write the necessary files natively. Last time I experimented, I imported the grizzly bear attack animation, which worked just fine.

If you download the development pack for cal3d, it has an excellent little demo called Cally which shows exactly what the library can do. Alas, actually compiling from source is a massive pain if you try to do it in Windows or 64-bit Linux. Ages back, I think I posted something about which version of the lib works best with EL/OL (some don't).

I'm not sure if Unity supports cal3d, so it's likely that we have a challenge ahead trying to convert the existing stuff. At a pinch, I guess we might use fbx, as it seems to be Unity's preferred import medium and carries animation data. However, we'd be reliant on Blender's fbx export, which is an unknown quantity as far as dynamic content is concerned.

#27 themuntdregger


Posted 17 June 2014 - 03:11 AM

Version 1.4.1 of e3d_conv is now uploaded to sourceforge at https://sourceforge..../version 1.4.1/ together with Linux and Windows 32-bit executables.

New version has cool new command line options and multiple file handling by Learner.

#28 Learner


Posted 17 June 2014 - 07:06 AM

FYI, we are trying to do a basic proof of concept test with .ELMs and converted E3Ds. Once it's looking reasonable, we'll update everyone on the details and make something available as we move forward. Need to take care of some initial major bugs first.

P.S. This isn't going to slow magic down. I have others doing the heavy lifting. I'm coordinating and doing the mass conversions.

#29 themuntdregger


Posted 25 July 2014 - 07:10 PM

themuntdregger, on 12 May 2014 - 07:18 PM, said:

... I'll try writing a standalone utility to convert obj to e3d (and hopefully vice versa)

Well, it turned out better to write the e3d to obj converter first (e3d_conv). Since then, I've been working on an obj to e3d converter (obj_conv).

The first draft of the code is written and is currently going through bug testing. Much of the code is experimental, involving techniques and functions that I've not used before, hence actually getting it to work has been a bit of a trial, and I'm not there yet. However, one by one, I've been dealing with the problems and am hopefully getting close to the point where it will actually convert something instead of crashing.

Whilst you might imagine that it should be no more difficult than writing the e3d to obj converter, in fact it's very much more difficult. That's because binary formats such as e3d are more problematic to write and debug than text formats such as obj. All it takes is one misplaced byte (amongst tens of thousands) and the whole file is corrupted. Finding which byte is wrong is no simple matter, as the e3d format uses dynamic hashes that change depending on multiple flag settings, plus non-standard data types such as half floats which aren't natively supported by the compiler. Add to that the fact that the e3d format comes in three different versions and you begin to see the challenges involved.

Atm, the current code writes an older version of the e3d format that's simpler than the later versions and therefore easier to debug and get working. The idea is to use this to prove the basic structure of the code, then start to build in handling of the more complex elements. Once we get to the point where the code produces a file which can be read by Blender or the EL map editor, the major part of the challenge will have been cracked.

Not sure when that's likely to be. However, it's a bit closer since the latest problem has been solved. Hopefully I'll be in a position to post some initial screenies soon.

#30 themuntdregger


Posted 28 July 2014 - 08:16 PM

Well, progress is still slowly continuing on the obj-e3d converter. At least I'm at the stage where the binary hashes all seem to be in the right format and order, although the materials hash data still needs some wrangling. Hopefully, this should be the last hurdle before the initial beta is ready. Regretfully, the code ain't pretty, so there's a lot of refinement that will need to take place.

I've also been planning some further changes to e3d_conv, including more error trapping code and a diagnostic option that dumps the entire e3d file in human readable form. All that should appear in version 1.7.

#31 themuntdregger


Posted 03 August 2014 - 05:18 PM

Version 1.7 of e3d_conv is now released together with the diagnostic option mentioned in the last post.

Hit some issues with obj_conv which required a fundamental rethink of how to write the e3d files. Am now using structs rather than a single large string of unsigned chars, which seems to be simpler and more reliable. Whilst that's meant dropping the ability to support all options available under the e3d format, none of the unsupported options are used in the current object files in any case.

#32 themuntdregger


Posted 04 August 2014 - 07:41 PM

Well, here's the first tentative screenshot of the EL house, which has been converted to wavefront obj using e3d_conv and then converted back to eternal lands e3d using the new obj_conv tool...

[screenshot]

Nope, doesn't look too good, does it? However, that rather overlooks the breakthrough that it works at all as, previously, it wasn't possible to get any kind of image (even a crappy broken one). Hence, at least some small progress has been made.

Should also mention that I've posted version 1.7.1 of e3d_conv, which fixes some obvious bugs in the last release.

#33 Fire

    Advanced Member

  • Full Member
  • 758 posts
  • Location: Poland, Europe

Posted 05 August 2014 - 02:29 AM

What is that conv used for?

#34 themuntdregger


Posted 05 August 2014 - 05:04 AM

The e3d_conv tool converts 3d object files created in EL's e3d format to an open format (Wavefront obj) where they can be easily edited. The only way to do this previously was to use EL's macro pack, which was unreliable and only worked with an ancient version of Blender. The obj_conv tool does the opposite of e3d_conv and converts Wavefront obj files to EL's e3d format. At least it will when I've finished it lol.

Together, the tools will provide an easy, convenient and reliable toolchain which will allow peeps to modify existing EL 3d objects or create new ones. That's something that's pretty much impossible atm due to the unreliable nature of the EL macros and the difficulties of working with a crappy ancient version of Blender.

The tools do a little more than just convert files. They also optimise the file data so that files are smaller and load faster; they contain diagnostics to help texture editing; plus, I'm working on an option for them to recalculate normal and tangent data so that objects look better. Oh yes, and there will also be an option to enable peeps to easily replace texture files without fiddling with 3d editors.

Any other ideas, let me know and I'll try and incorporate them.

#35 themuntdregger


Posted 05 August 2014 - 04:02 PM

Ok, here's the latest conversion viewed from the Map Editor. As before, the original EL e3d file has been converted to wavefront obj, then the obj_conv tool has been used to turn it back into the e3d that you see here...

[screenshot]

Fair to say, we now have a basic working prototype for obj_conv, so I'll be posting version 1.0 shortly on the UnoffLandz Sourceforge site. What now needs to be developed is the following:

a) tangent data calcs (tangents are presently set to zero)
b) recalculation of normals data (so that curved objects look better)
c) option to replace existing texture files (without having to use a 3d modelling app)
d) combine e3d_conv and obj_conv into one tool that does everything
e) experiment with some of the weird e3d options (colour, extra uv) that are no longer used but might be useful when creating new objects
f) option to optimise existing EL e3d files so they load quicker and have fewer seams and visual artifacts
g) think of a snappy new name for this tool

EDIT - prototype now posted at https://sourceforge..../obj converter/

#36 themuntdregger


Posted 10 August 2014 - 07:52 PM

Took a bit of a rest from obj_conv and spent some time revising the e3d_conv codebase. I'll confess that's largely just to make the code look pretty or, at least, prettier than it was. I also wanted to incorporate some of the techniques I learned whilst developing obj_conv, particularly the use of structs to simplify working with binary data.

As part of the above exercise, I took the opportunity to revise the command line options that can be used to produce diagnostic output. We now have the option of piping output to either the screen or a text file, plus header, vertex, index, material and dds outputs are now individually selectable, allowing the diagnostic output to be customised (new version to be posted in the next few days).

Learner has also come up with an interesting challenge for me, which is to find a better way to check for alpha in the dds texture files. Atm, we do this the same way as the EL/OL client, which is simply by checking the texture format. These come in lots of different flavours; however, the ones commonly found in the EL/OL files are DXT1, DXT3 and DXT5. The purpose of the texture formats is to specify the colour frequency supported in the data and the type of compression used to reduce its size. Whilst DXT3/5 supports a higher colour frequency and provides better alpha than DXT1, the latter has the advantage of a much smaller file size and is faster for the EL/OL client to load.

However, in practice, the EL/OL client only really requires DXT3/5 where an image has alpha. In most other circumstances this creates an unnecessary performance hit. Determining whether an image has alpha ought to be an easy matter, as the dds standard provides for a series of header flags which are there specifically to tell you this. However, it seems that the graphics app that was used to create the EL/OL files may not have been fully compliant with the required standard and failed to set the necessary flags. I suspect that it's for this reason that the EL/OL client determines if an image has alpha based simply on its texture format, and assumes that files with DXT1 have no alpha and those with DXT3/5 have alpha.

The problem is that this may not always be correct. However, in the absence of reliable header flags, the only other solution is to decompress the actual texture data and test it to establish if the alpha channel is being used. Anyhow, that's what I'm currently working on.

#37 themuntdregger


Posted 13 August 2014 - 07:16 PM

Still pretty much at the research stage of designing the 'alpha searcher'. However, I have at least managed to work out how to unpick the dds data sufficiently to isolate the alpha element.

The data is held in 4x4 texel blocks of 128 bits, with the first 64 bits of each block being used to carry the alpha data. Alas, it's all in 4-bit values which require some mangling to make them useable. Once that's done, it should then be a matter of scanning the data for any files that have all the alpha set to white. Interestingly, there's no need to scan the whole file. DDS files are comprised of a series of copies of the same image (known as mipmaps), each at a lower resolution than the last. Hence, scanning the first mipmap tells you everything you want to know.

Atm, I've got as far as locating the various mipmaps but have yet to start testing the individual blocks. Just to test out my mipmap calcs, I coded a small dds image viewer using the glut library, which could be a useful additional option for e3d_conv.

#38 Fire


Posted 14 August 2014 - 02:19 AM

Themunt, would it be possible to increase the max height that we can step on? Currently it's 4.0 only. Creating mountains in the map editor is pointless and looks shit. The visible max range for all items seems to be about 50, and that is pretty much it. It would help in creating a whole continent, because the current way of separating maps with these "mountains" is shit.

#39 themuntdregger


Posted 14 August 2014 - 04:16 AM

Yup. I agree Fire. Would be good to have a way of making mountains look more mountainous.

However, if we use large 3d meshes to create them and make them visible from further away, we're likely to introduce massive lag. Such mountains would also be a bitch to texture correctly. Use too few polygons and they will look crap; use more and there's a risk of massive lag.

The best way to do it would probably be with what are called voxels. However, creating terrain shapes with them is a very different business to the tile system currently used by the client. We'd therefore need a completely new map editor, map file system, ground texture system and so on. The good news is that the Unity engine (which Stardark is using to create the new mapmaker) handles voxels really well. It's therefore possible that a Unity client could be developed that uses voxel-based terrain.

I guess the starting point would be to rework the current elm file format so that we have a framework on which to build/convert maps. This could then be used as a basis for creating a 'proof of concept' using a game engine such as Unity to see how it works in practice. That could be quite an interesting project, and I might give it a try once I've finished the alpha searcher.

#40 Learner


Posted 14 August 2014 - 06:56 AM

Neither the client nor the server can currently handle a larger range of heights for walkable tiles. Both use one or more bits in the height map for special purposes in the pathfinding logic (the server uses multiple bits; the client, I think, one bit), which limits how many different heights can be used to the current range. Going beyond it would mean revamping both, and the map editor would need to know the new limits and/or the map format would have to change.



