Re: [cdi-dev] CDI future - problems and proposal

On 22. 10. 20 14:42, Graeme Rocher wrote:

  On Thu, Oct 22, 2020 at 12:09 PM Ladislav Thon <lthon@xxxxxxxxxx> wrote:

    On 20. 10. 20 11:42, Matej Novotny wrote:
    > * Bean Metadata
    >   You describe the need to come up with a new, reflection-free view of
    >   metadata that we could use.
    >   This is something that we already included in the draft of the
    >   extension SPI, because we came to a similar conclusion; see this [1]
    >   link for the model.
    >   I think we should note it there and point to the PR/issue, so that we
    >   improve that instead of creating duplicates. We are of course open to
    >   suggestions if you find the model unfit.
            
    I think there's an intrinsic conflict: on one hand, the desire to "do
    things right", including providing a new metamodel that is free from
    reflection; on the other hand, the desire to break compatibility as
    little as possible.
            
    The Build Compatible Extension proposal includes a new metamodel out of
    sheer necessity, but whether that needs to be used at runtime, I don't
    know. I do think that a big part of the existing runtime model can be
    implemented in a build-time architecture, and those places that require
    reflection perhaps don't have to be supported in Lite.
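
    To give a rough idea, a reflection-free metamodel could look something
    like this minimal sketch -- this is not the actual draft model from the
    PR, all names here are made up:

        import java.util.Collection;
        import java.util.List;

        // illustrative sketch only: metadata expressed as plain values,
        // with no java.lang.Class or java.lang.reflect types anywhere
        interface ClassInfo {
            String name();                     // binary class name
            Collection<MethodInfo> methods();
            Collection<AnnotationInfo> annotations();
        }

        interface MethodInfo {
            String name();
            List<String> parameterTypes();     // again just names, no Class<?>
            Collection<AnnotationInfo> annotations();
        }

        interface AnnotationInfo {
            String name();                     // annotation type's binary name
        }

    Such a model can be backed by an AST at build time and by reflection (or
    generated code) at runtime.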
            
    In general, I think the population of extension writers is a lot smaller
    than the population of CDI users, so I'd be willing to break more on the
    extension front and less on the user-facing front.
          
          
          
  Personally, I think there are still times where you may need to evaluate
  the metamodel at runtime, and the important thing is that the metamodel is
  consistent between what is seen at build time vs. what is seen at runtime.
  An example is an interceptor, where you would want to evaluate the
  annotation model of the intercepted method using the same metamodel.
  Having said that, I don't think it necessarily needs to be the same API,
  it just needs to be consistent.
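
  For example (a sketch against today's reflection-based API; the @Timed
  binding here is made up purely for illustration):

      import java.lang.annotation.ElementType;
      import java.lang.annotation.Retention;
      import java.lang.annotation.RetentionPolicy;
      import java.lang.annotation.Target;
      import javax.annotation.Priority;
      import javax.interceptor.AroundInvoke;
      import javax.interceptor.Interceptor;
      import javax.interceptor.InterceptorBinding;
      import javax.interceptor.InvocationContext;

      // made-up interceptor binding, declared here only to keep the
      // sketch self-contained
      @InterceptorBinding
      @Retention(RetentionPolicy.RUNTIME)
      @Target({ElementType.TYPE, ElementType.METHOD})
      @interface Timed {
      }

      @Timed
      @Interceptor
      @Priority(Interceptor.Priority.APPLICATION)
      public class TimedInterceptor {

          @AroundInvoke
          Object time(InvocationContext ctx) throws Exception {
              // today this is plain reflection; in Lite the same answer
              // would have to come from the build-time metamodel, and the
              // two views must agree
              boolean timed = ctx.getMethod().isAnnotationPresent(Timed.class);
              long start = System.nanoTime();
              try {
                  return ctx.proceed();
              } finally {
                  if (timed) {
                      System.out.println(ctx.getMethod().getName() + " took "
                              + (System.nanoTime() - start) + " ns");
                  }
              }
          }
      }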
         
      
Absolutely. I was thinking more about the API -- but clearly the metadata
accessible at runtime must be the same as the build-time metadata.
    
    
      
        
          
    > Other notes:
    >
    > * beans.xml handling
    >   You made beans.xml optional - how do you intend to discover beans,
    >   and how do you know in which of the dependencies (JARs) they reside?
    >   I'd say it's way more practical to instead have beans.xml as a marker
    >   file (== ignoring its contents) denoting that a given archive is to
    >   be searched for beans.
    >   That way, you can further optimize which archives are to be left
    >   alone and which are to be taken into account.
    >   Though maybe I am just misunderstanding how you intend to perform
    >   discovery - could you please elaborate?
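
    (For concreteness, I read the marker file proposal as a mere presence
    check, roughly like this sketch -- the helper name is made up:)

        import java.nio.file.Files;
        import java.nio.file.Path;

        // sketch of the marker-file idea: the mere presence of
        // META-INF/beans.xml decides whether an archive is scanned for
        // beans; its contents are ignored entirely
        final class BeanArchiveCheck {
            static boolean isBeanArchive(Path archiveRoot) {
                return Files.exists(archiveRoot.resolve("META-INF/beans.xml"));
            }
        }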
            
    Bean discovery is a very interesting topic. If anything, I think we
    agree that we only want the "annotated" mode, and that we don't want
    anything like multiple bean archives :-)

    The rest is ... hard. It seems to me that CDI always assumed that the
    container would see the entire world at once (I might be wrong about
    that, but at least all existing implementations do that?). But if I
    understand correctly, with annotation processors, you'd want to do bean
    discovery (and run extensions, and perhaps more) "per partes",
    incrementally. That's an interesting constraint, and I'm pretty sure
    that I myself can't see all the consequences.
            
    But requiring people to modify their JARs, otherwise beans in those JARs
    won't be discovered, seems a bit harsh to me. Surely there _is_ a point
    in time where you get to see the whole world? (A Maven plugin or Gradle
    plugin, for example.) This would of course require operating on bytecode
    instead of AST, and I can see how you don't want to do that, or at least
    want to minimize that work...
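
    (To make sure we are talking about the same thing, my mental model of
    the "per partes" processing is the standard annotation processing loop;
    a minimal sketch:)

        import java.util.Set;
        import javax.annotation.processing.AbstractProcessor;
        import javax.annotation.processing.RoundEnvironment;
        import javax.annotation.processing.SupportedAnnotationTypes;
        import javax.annotation.processing.SupportedSourceVersion;
        import javax.lang.model.SourceVersion;
        import javax.lang.model.element.Element;
        import javax.lang.model.element.TypeElement;

        // minimal sketch: a processor only ever sees the sources compiled
        // in the current round, never the whole application at once
        @SupportedAnnotationTypes("javax.inject.Singleton")
        @SupportedSourceVersion(SourceVersion.RELEASE_8)
        public class BeanDiscoveryProcessor extends AbstractProcessor {

            @Override
            public boolean process(Set<? extends TypeElement> annotations,
                                   RoundEnvironment roundEnv) {
                for (TypeElement annotation : annotations) {
                    for (Element bean : roundEnv.getElementsAnnotatedWith(annotation)) {
                        // "discover" this bean; beans compiled in other
                        // modules or shipped in dependency JARs are not
                        // visible here
                    }
                }
                return false;
            }
        }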
          
          
          
  To be clear, it is possible to materialize a view of the whole world
  inside an annotation processor, since annotation processors can operate
  against compiled code. However, in general that approach has performance
  consequences with regard to compilation speed and incremental compilation,
  since javac has to fuse an AST node from compiled method signatures.
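
  (That "whole world" view boils down to resolving already-compiled types
  through the Elements API; the type name below is made up:)

      import javax.annotation.processing.ProcessingEnvironment;
      import javax.lang.model.element.TypeElement;

      // sketch: resolving a type from a compiled dependency; javac has to
      // rebuild an element model from the class file, which is where the
      // compilation-speed cost comes in
      final class WholeWorldLookup {
          static TypeElement resolve(ProcessingEnvironment env) {
              return env.getElementUtils()
                      .getTypeElement("com.example.library.SomeService");
          }
      }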
          
          
  I am not sure it is needed, frankly, and this is an area where Micronaut
  and Quarkus differ a great deal in the implementation, since Micronaut
  places additional synthetic bytecode and metadata within the JAR files of
  extensions. These are already enhanced, and hence don't need to be
  reprocessed at all by a compiler that has a view of the whole world.
          
          
          In other words when a user consumes an extension it has
            already been compiled with the Micronaut compiler extensions
            and is ready to go and doesn't need to be reprocessed at all
            by a downstream compilation / bytecode enhancement process.
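
  (Conceptually -- and this is a made-up illustration, not Micronaut's
  actual generated code -- the extension JAR ships a precomputed companion
  class next to each bean, so a downstream build can read the metadata
  without re-scanning sources or bytecode:)

      // made-up illustration of precomputed, shipped-in-the-JAR metadata;
      // none of these names come from Micronaut's real output
      public final class GreetingService$Metadata {

          public static final String BEAN_TYPE =
                  "com.example.GreetingService";
          public static final String SCOPE = "javax.inject.Singleton";
          public static final String[] INJECTION_POINTS = {
                  "greetingRepository"
          };

          private GreetingService$Metadata() {
          }
      }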
         
       
    
I hear you, but I'm not sure how feasible such an approach is for CDI. Off
the top of my head, I can think of two issues: we'd have to agree on a
common serialization format (which I'd rather not, but that is a technical
problem and can be solved if need be), and we'd have to require all JARs
that could possibly be used in CDI Lite applications to be pre-processed by
some tool (which, to me, is a show-stopper, but I might be missing
something).
Or we could invent some constraints. With CDI, it has always been possible
to have an extension A.jar that integrates framework B.jar and "modifies"
classes from C.jar as part of that (see the sketch below). This is not a
problem if you have cheap access to the whole world. I guess this could be
constrained, but if you run extensions incrementally, parts of the
extension will always be inactive, so it seems very hard to detect whether
the extension is actually following those constraints or not. In other
words: high potential for easy-to-miss errors. It also limits the
expressive power quite a bit compared to the current state, which I'd like
to be very careful about.
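
To illustrate the kind of extension I mean, here is a sketch of a classic
portable extension; the com.example.* names are placeholders:

    import javax.enterprise.event.Observes;
    import javax.enterprise.inject.literal.NamedLiteral;
    import javax.enterprise.inject.spi.Extension;
    import javax.enterprise.inject.spi.ProcessAnnotatedType;

    // lives in A.jar, integrates framework B.jar, and "modifies" a class
    // from C.jar -- trivial with a whole-world view, hard to validate when
    // extensions run incrementally
    public class FrameworkBIntegration implements Extension {

        void enhance(@Observes ProcessAnnotatedType<?> pat) {
            // target a class that lives in C.jar (placeholder name)
            if (pat.getAnnotatedType().getJavaClass().getName()
                    .equals("com.example.c.SomeClass")) {
                // add a name to a class the extension doesn't own
                pat.configureAnnotatedType().add(NamedLiteral.of("enhanced"));
            }
        }
    }
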
OK, I don't have answers, only more questions, sorry :-)
Do you perhaps have a pointer to something I could read about how your
extensions work? Code is more than enough, because at this point, I feel
like I need to experiment with something real instead of speculating in
the abstract.
    
LT