Re: [XaraXtreme-dev] source?
Hi,
> > Now, to add some fun to this we have the problem of targeting. From my
> > development work on ARM, Linux x86 and x64 plus what I've picked up on
> > OS X, there are quite a few differences in what is required for x86 and
> > x64 as well as little- and big-endian systems. Is it safe to assume
> > that initially, x86 and OS X's current processor will be the target with
> > x64 coming later? Or is the code structured in such a way that it is
> > going to be independent of both endian and processor type?
>
> The problem of targeting is increased to a pretty good challenge
> considering the desire on Apple's part to move everyone to fat
> binaries in Xcode.
Just because Apple want to bloat things to insane levels doesn't mean
anyone else has to. Some of the finest pieces of code I've ever seen
have been done on a Win32 box, with the final binary a fraction of the
size of the equivalent on any other platform or vendor - most don't expect that from
Win32 programmers. Me, I'm from the RISC OS school - as much as possible
in as small a space as possible.
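On the endian question above: the safe pattern is never to assume byte
order when touching external data. A minimal sketch (my own
illustration, not anything from the Xara code) of reading a
little-endian 32-bit value portably, so the same source behaves
identically on either byte order:

    #include <stdint.h>

    /* Assemble the value byte by byte instead of casting the
       pointer, so the host byte order never enters into it. */
    uint32_t read_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }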
> Maybe some of the x86 assembly routines (assuming
> they exist) could be used in an x86 OS X.
Unlikely to work unless it's something like a loop which does some maths
and splurges out the result. For anything else, forget it.
> Either way, it's probably
> safer to convert the assembly routines to portable C/C++ to maintain
> some sense of corresponding code between the Linux and OS X versions.
Couldn't agree more with you there. Raw assembler is fantastic for
platform-specific material, but move it elsewhere and you're asking for
many a nightmare.
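To put some flesh on that, the kind of "loop which does some maths"
routine in question - premultiplying alpha over a pixel buffer, say -
might look like this in portable C++ (a hypothetical example, not
anything from the Xara sources):

    #include <stdint.h>
    #include <stddef.h>

    /* Premultiply the 8-bit alpha into the RGB components of an
       RGBA buffer - a classic candidate for a hand-rolled
       assembler inner loop, but written this way it compiles
       anywhere. */
    void premultiply_alpha(uint8_t *px, size_t count)
    {
        for (size_t i = 0; i < count; ++i, px += 4)
        {
            unsigned a = px[3];
            px[0] = (uint8_t)((px[0] * a + 127) / 255);
            px[1] = (uint8_t)((px[1] * a + 127) / 255);
            px[2] = (uint8_t)((px[2] * a + 127) / 255);
        }
    }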
> Another wrinkle is the operating systems' individual capabilities as
> far as optimization goes.
Not really. Optimisation is essentially a two-stroke process. The first
is to know pretty well how the compiler will factor any given code.
There is a general rule that you should never second-guess a compiler -
it will optimise code for you. The problem obviously then comes down to
the compiler. For example, templates under gcc 3.1 were, oddly,
optimised more efficiently than under 3.3.3, but 4.0.3 knocks the living
tar out of both of them. Now, Intel's C++ compiler is (from what I've
seen) very poor with generics, but is very good for maths.
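The simplest way to learn how a given compiler factors your code is to
look at what it emits; with gcc, for instance:

    g++ -O2 -S -o - test.cpp | less

dumps the generated assembler to the terminal, which makes comparing
what 3.x and 4.x actually do with, say, a templated loop a five-minute
job.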
The second stroke is understanding the processor. There is an old
example (often cited) that for (int i = 0; i < 10; ++i) is almost twice
as fast on ARM processors as for (int i = 0; i < 10; i++) - under
x86 there is 7/10ths of nothing between them.
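For plain ints any decent compiler now emits identical code for both
forms, but the habit still matters for class types: post-incrementing a
C++ iterator forces a copy of its old value, pre-incrementing doesn't.

    #include <list>

    void bump(std::list<int> &l)
    {
        /* ++it advances in place; it++ would have to construct
           and return a temporary copy of the iterator first. */
        for (std::list<int>::iterator it = l.begin();
             it != l.end(); ++it)
            *it += 1;
    }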
Oh yeah, you also have to be able to analyse, via memory and CPU usage
monitors, where the sludge code is.
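On Linux, gprof is the usual first stop for that sort of digging
(illustrative commands, adjust to taste):

    g++ -O2 -pg app.cpp -o app    # build with profiling hooks
    ./app                         # running it writes gmon.out
    gprof app gmon.out | less     # flat profile of where time went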
> I'm betting we'll have to shoot for lowest
> common denominator and not go for broke with DirectX/CoreImage/Xrender
> extensions. Of course, I'm not entirely sure what is already handled
> within wxWidgets for these.
wx is basically an overlay for the target platform's UI - it's almost
like an interpreted language in that respect. Therefore the Win32
version may well interface with DirectX whereas the Linux one uses X.
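For what it's worth, the canonical minimal wx program shows how little
of the platform shows through - the same few lines produce a native
window on Win32, GTK or Mac:

    #include <wx/wx.h>

    /* Smallest complete wxWidgets application: one empty frame. */
    class MinimalApp : public wxApp
    {
    public:
        virtual bool OnInit()
        {
            wxFrame *frame =
                new wxFrame(NULL, wxID_ANY, wxT("Hello wx"));
            frame->Show(true);
            return true;
        }
    };

    IMPLEMENT_APP(MinimalApp)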
> I think there was already some advanced
> canvas project a while back that focused on optimization.
Don't recall that one.
> But before we get the code, we don't know what optimizations have been
> built in, or if the code is separated into GUI and controlling code.
True. I would hope that there is a level of separation between the
control and interface code. There is nothing more likely to give a
developer who's not used to UI work a headache than a hotchpotch of
code.
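By separation I mean the sort of split where the engine never sees a
widget - roughly this shape (names purely illustrative):

    /* Controlling code: no UI types in its interface at all. */
    class Document
    {
    public:
        Document() : m_zoom(1.0) {}
        void SetZoom(double z) { m_zoom = z; }
        double GetZoom() const { return m_zoom; }
    private:
        double m_zoom;
    };

    /* Interface code: the only layer that knows about widgets.
       It owns a Document and turns UI events into calls on it. */
    class CanvasView
    {
    public:
        explicit CanvasView(Document *doc) : m_doc(doc) {}
        void OnZoomIn() { m_doc->SetZoom(m_doc->GetZoom() * 2.0); }
    private:
        Document *m_doc;
    };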
> I'm not too hopeful about it being easy to separate from one platform.
As long as the C/C++ is standards compliant and is not using any insane
library, porting is not that bad. It just gets messy sometimes when
trying to iron out the platform's foibles.
TTFN
Paul
--
"Duirt me leat go raibh me breoite." - T.M.